By Shirin Ghaffary, Bloomberg
Compiled by: Peggy, BlockBeats

Editor's Note: This article is translated from a Bloomberg interview with Peter Steinberger, founder of OpenClaw. Since joining OpenAI, he has been involved in driving the development of next-generation AI agent technology. The direction of enabling AI to go beyond simply answering questions, but to invoke tools, collaborate across systems, and continuously act within its environment, is becoming a new core of competition in the industry.
In this interview, he addresses several key questions: What do the different adoption paths of OpenClaw in China and the US signify? How can AI agents be improved? How can secure collaboration be achieved between individual and professional agents? How will OpenAI advance this technological direction?
The following is the original text:
The creator of OpenClaw (who recently joined OpenAI) believes that more people should try artificial intelligence for themselves and learn from it, to help society better prepare for the technology. But first, three things you need to know:
• OpenAI has ended support for Sora and is gradually ending its partnership with Disney.
• Apple plans to redesign Siri with AI and introduce a new interface and "Ask Siri" button in iOS 27.
• Amazon has acquired Fauna Robotics, entering the consumer-facing humanoid robot market.
Months after OpenClaw gained popularity, the paths taken by the United States and China in embracing cutting-edge artificial intelligence products have diverged significantly, and this difference may have a profound impact on the technological competition landscape between the two countries.
In China, from students and working professionals to the elderly, more and more people are starting to try using OpenClaw, and some companies are even requiring their employees to use the product. Although regulators have begun to restrict its application in state-owned enterprises and government agencies, China as a whole is still becoming a large-scale testing ground—allowing AI systems to gradually take over people's digital lives.
In contrast, while OpenClaw (formerly known as Moltbot and Clawdbot) has garnered significant attention among developers and early users in the United States, it has yet to achieve the same level of popularity among the general public. Some US companies have even begun restricting employee access to such AI agent tools due to security concerns.
This starkly divergent market response has also caught the attention of OpenClaw's founder.
“In the US, I feel that in some companies, if you use OpenClaw, you might get fired,” said Peter Steinberger, an Austrian software engineer who developed the tool and now works at OpenAI on AI agent-related technologies. “In China, however, it’s the exact opposite in many companies—you might get fired if you don’t use OpenClaw.”
Steinberger's product was hailed by Nvidia CEO Jensen Huang as "perhaps the most important software release ever." Still, Steinberger acknowledged that neither the American nor the Chinese path is perfect. While OpenClaw's initial goal was to automate tasks such as flight check-in and schedule management, he noted that real security risks remain.
"But there's no doubt we can also learn something from adopting new technologies more quickly or from embracing different risk appetites," Steinberger told me in an interview this week at OpenAI's San Francisco headquarters. "Ultimately, this technology is still too new, and the only way we can learn is by using it ourselves and trying it out."
In his new role at OpenAI, Steinberger will be involved in the development of Codex, a programming tool currently used by over 2 million users weekly. On such a high-profile platform, he understands that the market demands higher levels of product security and stability, necessitating the minimization of errors.
In our conversation, Steinberger discussed how to improve AI agents, OpenAI's future plans for the technology, and why, with his new employer's support, he is continuing to maintain OpenClaw as an open-source project and plans to entrust it to a soon-to-be-established foundation. The following interview has been edited for length and clarity.
Bloomberg: Sam Altman called you a "genius" and said you would drive the development of the next generation of personal AI agents. What exactly will that look like at OpenAI?
Steinberger: We are rapidly moving towards a future where everyone has a personal agent for their private life and a work agent for their job. With OpenClaw, I'm essentially building a "window to the future," showcasing my ideal world. Of course, I'm also aware that no company can truly bring it to the masses yet, because there are some key issues that need to be resolved before that can happen.
Bloomberg: What are those key issues, specifically?
Steinberger: In that future, my agent needs to be able to communicate with your agent. For example, I work at OpenAI and use Codex for knowledge work daily, but sometimes I need to access data in my personal "claw". So there must be a mechanism that allows my work agent to call my personal agent. At the same time, I need to ensure the personal agent doesn't disclose anything I consider too private, and OpenAI must ensure that internal company data doesn't flow back to my personal device.
Bloomberg: You've probably noticed that, for example, at Meta Platforms, there were issues with employees overusing agent tools, and now some companies are starting to tighten restrictions.
Steinberger: In the US, I feel that at some companies you might get fired for using OpenClaw; at many companies in China it's the opposite, and you might get fired for not using it. Some companies even showed me a spreadsheet listing each employee's name alongside a column for "What did you automate today?" Companies are very proactively pushing employees to think about how to become ten times more efficient.
Neither approach is perfect, but we can certainly learn something from adopting new technologies more quickly and experimenting with different risk appetites. Because this technology is so new, we can only understand it through continuous trial and error.
Even at Meta, a security researcher was heavily ridiculed on Twitter for publicly raising the issue. I actually find that courageous. If everyone laughs at these attempts, it will only discourage people from speaking out.
Bloomberg: What do you think of the craze OpenClaw has generated in China? Many people are even queuing up to experience it. Do you have any collaborations with Chinese companies?
Steinberger: At GTC, I had discussions with many companies, such as MiniMax, Kimi, and Tencent. I can actually understand this current "frenzy" because I've experienced similar moments myself.
A year ago, when I first tried coding agents, they had only about a 30% success rate, but getting even the smallest thing right triggered a strong dopamine rush. At the same time, you could see that this was going to completely transform the industry, and that this was the worst these tools would ever be; they could only improve from there. In that moment, I realized I could build almost anything, because everything had become so much faster.
Now imagine you're not a tech person, but a small business owner, and suddenly discover: "It can read my emails, manage my schedule, write Google Docs, connect to my home devices, view WhatsApp messages, handle customer service requests..." You'll experience the same epiphany that engineers have had over the past year.
I even suffered from insomnia during that period because this change was so disruptive. I'm glad that I can bring AI closer to more people from different backgrounds.
Bloomberg: OpenAI's Codex has been growing rapidly recently. What are your thoughts on the combination of Codex and OpenClaw?
Steinberger: One of the core issues we face right now is how to make users understand that a product framed as a coding tool is actually much more than that.
Take a longer-term view: every prompt becomes more powerful when coding capabilities are behind it. AI agents are smart enough to know their own weaknesses and compensate by writing code.

So is the distinction between what is and isn't a coding tool still meaningful? That's the conclusion we reached inside OpenAI: in the future this distinction will no longer matter, and these products will ultimately need to be integrated into a single system.
Bloomberg: What if the agent could access all your files and run continuously?
Steinberger: It's really a question of "how to explain it to users." You can now connect to almost anything in the ChatGPT ecosystem, such as Slack, Google Docs, Notion, health data, and so on. But the current challenge is how to make users truly understand that these capabilities are already available.
Another challenge is pace. If you're working on an open-source project, you can move quickly, because users are more forgiving; they know it's a preview and not meant for real working data. But once real working data is involved, the situation is completely different, and refinement takes much longer.
I look forward to being involved in solving these problems.
Bloomberg: What's the progress of the OpenClaw Foundation? Does OpenAI support it?
Steinberger: I'm trying to minimize OpenAI's involvement because this project needs to remain independent. The legal and organizational structure still needs a few weeks to finalize.
We already have some great partners, such as Nvidia, and we're in discussions with Microsoft. ByteDance has already joined, and Tencent is in the process of joining. I hope to maintain a kind of "Swiss-style neutrality."
Our goal is to spark greater interest in AI and encourage more people to start using it to think about problems. The most crucial aspect for the future is getting more people to spend more time understanding what AI can do, thus preparing society as a whole. This is the best way to ensure a bright future.
