00:00:00OpenClaw is becoming part of OpenAI and Peter Steinberger, the creator of OpenClaw,
00:00:05is joining OpenAI. And I think that's interesting from the perspective of a developer for a couple
00:00:12of reasons. It tells us something about where OpenAI is heading and has a couple of interesting
00:00:17implications, I would say. In a blog post Peter shared on his website, he mentions that his next
00:00:25mission is to build an agent that even his mom can use. Because of course with OpenClaw right now,
00:00:32you still need a certain level of technical knowledge or expertise in order to set it up
00:00:38and use it. And I would argue that's a good thing, because as I also argued in a video a couple of
00:00:45days or weeks ago, OpenClaw can be dangerous or has some serious security issues, which a lot of
00:00:53people even with technical expertise are either not getting or just ignoring. Or they're not doing as
00:01:00much useful stuff with OpenClaw as they share. Which is of course absolutely possible because,
00:01:06you know, it's the internet and 2026. Anyways, I shared my opinion on OpenClaw in that video,
00:01:12but I'll come back to it from time to time here. This mission of having an AI agent that's usable
00:01:19by everybody essentially, assuming that his mom is not a cracked developer, is obviously what all the
00:01:27AI companies, OpenAI and Thoughtbreak, they are all pursuing it. Because of course having us developers
00:01:34as customers with tools like Cloud Code, Codex and so on, which are all great, is nice, but the
00:01:41market out there is way bigger. And of course the end goal of these companies is to get to a point
00:01:47where AI agents do it all and, well, essentially we can replace humans. First in the digital space
00:01:56and then with robotics everywhere. Now, I won't get into whether that will happen anytime soon. I
00:02:04think I shared my opinion on this in many videos already. I don't even see developers being replaced
00:02:11anytime soon, leave alone humans in general. And I also don't believe that is a good goal. But yeah,
00:02:18that's a whole different story. What is interesting here though, is that Sam Altman shared that with
00:02:27Peter Steinberger joining OpenAI, they in the end expect that this OpenClaw will quickly become core
00:02:35to our product offerings. So an AI agent like OpenClaw. He also mentions that OpenClaw will
00:02:42live in a foundation as an open source project, so it will stay open source, that OpenAI will continue
00:02:48to support. The future is going to be extremely multi-agent, then it's important to us to support
00:02:54open source as part of that. What I'm reading from that statement pretty clearly is that OpenAI will
00:03:01build their own agentic offering on top of OpenClaw or using some of the learnings or even parts of the
00:03:09codebase of OpenClaw. So OpenClaw, yeah, will stay open source and they will continue to support it,
00:03:16maybe just as they will continue to stay open. I don't know. But they will definitely also build
00:03:22an agent on all those learnings from OpenClaw, which as I understand it will very likely not be
00:03:29open, but which will be a paid offering, which of course makes total sense from a business point of
00:03:34view. Now, in my video, I shared some concerns about security related to OpenClaw. And my biggest
00:03:44concerns are not about vulnerabilities in the codebase, which of course can be fixed over time
00:03:50and where of course having a company the size of OpenAI in your back can help a lot. That's not my
00:03:58biggest concern. My biggest concern with OpenClaw was and is prompt injections. That is my number
00:04:08one concern with all these agents. And that is a fundamental problem with large language models.
00:04:16Coding agents are great. Great assistants. They get lots of stuff wrong. They need steering. And
00:04:25whenever I hear people tell me that they can already write end-to-end software, I'm eager to
00:04:31see all that software out there, because that has not been my experience. But in the hands of a
00:04:37capable developer, those coding agents can indeed yield a significant productivity boost. That is my
00:04:44experience with them. That is what I think about coding agents. But important, they're far away
00:04:50from replacing developers. And of course, coding to some extent is the best task for these AI models
00:04:58and large language models because it's text in, text out. And a lot of relevant context, not all
00:05:04of it, but a lot of the relevant context is right there in the codebase for the model to see and use.
00:05:10And that of course is a huge difference compared to many other tasks in the world, in the digital
00:05:16world too. But no matter if we're talking about coding agents or agents like OpenClaw, prompt
00:05:22injections are a major problem. Now, a prompt injection can be subtle or unexpected. Of course,
00:05:30you could say you are writing the prompts with OpenClaw or with coding agents, but as soon as
00:05:37your coding agent goes visit a website or in the case of OpenClaw, reads an email, all that stuff,
00:05:44all that text it digests goes into a new prompt. And that is an attack vector right there for prompt
00:05:51injections. Now with coding agents, the advantage is that for one, you should run them in a sandbox,
00:05:58but second, they typically don't have broad access to your email account, to access to all your
00:06:05digital life. And you as a developer should be paying attention to what your coding agent does.
00:06:13You typically have to grant permission or deny permission. It asks you all the time if it's
00:06:18allowed to do that or do that. And of course you can run it in some dangerous mode. And if you put
00:06:24it in a sandbox, you might be safe enough that it's not going to delete your hard drive, but you are
00:06:32entering some dangerous territory there nonetheless when it comes to prompt injections, if you're not
00:06:39checking in on what your coding agent is doing at all. Still, the major difference compared to
00:06:45OpenClaw is that the coding agent does not have access to your entire digital life. Now,
00:06:49the idea with OpenClaw, of course, is to have that agent that even Peter's mom can use. And of course,
00:06:57the idea is that you have an agent that can do anything for you. You just texted, "Hey,
00:07:03please check my emails and draft some replies." Of course, it needs access to your email account then.
00:07:09If it has access to your email account, it could send emails. And I'm not talking about sending
00:07:14emails where it maybe starts insulting people, which would be bad enough, but it could exfiltrate
00:07:19data because guess what? It can also gather data. It can roam your system, collect data,
00:07:24and then send it through an email to some malicious bad actor due to a prompt injection, for example.
00:07:30That is the huge security issue I see, which I shared in this video, and which of course
00:07:35persists. And that's the important part, which will persist even with OpenAI behind the project.
00:07:42Now, is it something that can be solved? My understanding is with large language models,
00:07:48the way they work, not in general, but instead only with a concept that we've had for a long time,
00:07:54which is called zero trust. The idea is that you run your coding agent in an environment where you
00:08:02don't trust any of its actions and where you restrict what it can do, and you have to explicitly
00:08:10grant permissions for certain things you want to allow it to do. And for example, when it comes to
00:08:14sending emails, you might want to set up an environment where it has to ask you for every
00:08:20single email it wants to send and you as a human have to validate it and then approve or deny that
00:08:25request. Of course, such an environment, which is super restrictive and asks you all the time
00:08:33whether you want to grant a certain permission or not, is not really in line with that philosophy
00:08:39of having an amazing agent that can do it all on your behalf whilst you are in bad sleep.
00:08:45And that is why I don't see that agentic future right now, maybe not anytime soon,
00:08:55but we'll see what OpenAI can do there I guess. They also might just not care too much about
00:09:03security issues but then again they probably have to because their paying customers will likely not
00:09:10be too happy if data gets exfiltrated. Long story short, I'm still super skeptical even though I see
00:09:20the appeal of agents like OpenClaw. Having that helpful 24/7 assistant that can help you with
00:09:27digital stuff may sound pretty nice and I definitely see that this is a future we could be heading
00:09:35towards. And even though I don't yet have the use cases where it would be super useful because I don't
00:09:42want to give it those broad permissions and even with them I'm not sure if I would have that many
00:09:48use cases, even with all that in mind I totally get why OpenAI is super interested in this project.
00:09:55It also very well might mean though that OpenClaw is kind of going away over time, the hype might
00:10:03be dying down relatively soon also because of the image of OpenAI and of course because as I read
00:10:09this, this sounds very much like the typical corporate statement where they say "yeah yeah we're
00:10:14going to support this" and then at some point they stop supporting it, they don't care, they build
00:10:20their own product and then I guess we will see how good or bad that product is.