Transcript

00:00:00So, OpenClaw, or Clodbot as it used to be called, and Moldbook, it's been some intense
00:00:06days on this internet, we got a new AI hype and of course I also spent the last days trying
00:00:13to get as much as possible out of OpenClaw and I got some feelings and thoughts and they
00:00:19differ from most of the other videos and posts I saw.
00:00:24But let me start with a short story and I'm sure you can figure out the analogy.
00:00:31Imagine you're living in a town, a village, and in that town there is that really friendly
00:00:37guy that is super eager and happy to help you with all kinds of tasks.
00:00:43He does all the chores you don't want to do, or at least he tries to.
00:00:48He's happy to take your kids to school, to clean your house, clean your car, do the groceries
00:00:54for you and you can lean back, relax and let's say in order to really help you, of course,
00:01:03that assistant, that person needs broad permissions.
00:01:06You need to give him the keys to your house, you need to give him the keys to your car so
00:01:12that he can clean it from the inside and do groceries with it.
00:01:16You also, of course, need to tell your kids to get into the car with him so that he can
00:01:21take them to school and so on and so forth.
00:01:24Now there is a problem with that guy though.
00:01:28He's super friendly but he sometimes comes to weird conclusions.
00:01:32At least you can't rule out that he will come to weird conclusions.
00:01:37He may conclude that the best way to get rid of all the dirt in your house is to set it
00:01:43on fire.
00:01:44Unfortunately he also is easily influenced by others, at least if they are a bit more
00:01:51deceptive about it.
00:01:53He can be influenced to maybe steal your car because that's better for society as a whole.
00:02:03Again, not guaranteed, not necessarily going to happen but absolutely possible.
00:02:09You can't rule it out.
00:02:12And therefore, of course, you unfortunately have to take away many of the permissions and
00:02:19much of the access you granted that guy because you can't entirely trust him.
00:02:26And the things that could happen are too bad for you to just live with them or accept them
00:02:33as a potential danger.
00:02:35So unfortunately, of course, as you take away many of those permissions and access rights,
00:02:40he gets less and less useful to you.
00:02:43And then there is another problem.
00:02:46Even with broad permissions you didn't get as much use out of him as you hoped to.
00:02:54Because the tasks you were promised he could do, he only sometimes did.
00:03:00And some of them he was not able to do at all or he forgot how to do something or did the
00:03:06same task differently every time you asked him about it or needed a lot of input from
00:03:12your side.
00:03:14So ultimately, you're not convinced.
00:03:17And that's been, you guessed it, my experience with OpenClaw.
00:03:22And believe me, I tried.
00:03:24I read many good things.
00:03:25I heard many good things about it.
00:03:27So of course I tried.
00:03:29I spun up my own VPS.
00:03:31By the way, I didn't know, but you can actually also use VPS providers other than Hostinger.
00:03:38Nothing against Hostinger.
00:03:39Ah, I just had a different feeling when I watched many of those videos.
00:03:43But anyways, I spun up my own VPS and I installed OpenClaw on it.
00:03:50And of course you could also install it on your system.
00:03:54There is one single command you need to run and then you're good to go.
00:03:58But personally, I would never install it on my MacBook, even though I'm fully aware that
00:04:04I would be able to get more out of it if I would install it there.
00:04:09But I'll get back to why I didn't install it there and why I would never install it there
00:04:13right now later.
00:04:15So I installed it on my VPS and I went through that onboarding flow.
00:04:19And I'm sure you saw that many times now already and you maybe already went through it on your
00:04:24own.
00:04:25I linked it up to my chatgpt+ subscription in the end, I set up my telegram bot and I
00:04:33was ready to communicate with my bot in the end, with my OpenClaw bot.
00:04:39And then there I sat and had to think of things I wanted it to do for me.
00:04:47Now of course, I've seen plenty of other posts and videos where people used it to have it
00:04:54build dashboards for them or do web research or find cheaper flights or even buy stuff.
00:05:04But I didn't feel like giving it access to my credit card.
00:05:08And I'm not sure about you, but I typically don't fly three times a day.
00:05:13So looking for those flights myself, especially since there are flight comparison sites out
00:05:19there that find you the cheapest flight wasn't too difficult of a task for me.
00:05:24And I genuinely enjoy the process of planning my trips, but of course that may be different
00:05:29for everybody else.
00:05:30Now for research, I had the problem that I'm super happy with the AI powered research tools
00:05:35that already exist like the AI mode on Google.com or deep research on Gemini or on chatgpt.
00:05:43I use those a lot.
00:05:44I find them really helpful.
00:05:46So I didn't really need my own bot for that, that has a high chance of performing worse
00:05:53actually.
00:05:54Now I do get, there are certain areas where it could be better than those other research
00:06:02bots or services.
00:06:03For example, if I would grant it access to my X account, let's say, I understand that
00:06:10it could of course do research in areas where you need to be logged in or where my history
00:06:16matters.
00:06:17I fully get that.
00:06:20So that's why I'm using Super Grok for that, for example, if I want to research on X.
00:06:24But yeah, I get that if you give it broad permissions, if you allow it to log into your accounts,
00:06:31use your browser, maybe run on your system, you can probably get a bit more out of it
00:06:37that I was able to get out of it.
00:06:39And maybe I'm just also not creative enough.
00:06:43And by the way, just to be very clear, and I think I have made that clear in other videos
00:06:47too, I'm a heavy user of AI, not just for research, but also for coding, for example.
00:06:53I recently released an entire ClotCode course because I'm using ClotCode and all those other
00:06:58tools like Cursor for building software.
00:07:01I think AI is a huge help there or can be a huge help there.
00:07:07So that's not a general thing against AI.
00:07:09I just genuinely didn't find the amazing use cases for OpenClaw, especially when not running
00:07:16it on my machine.
00:07:17And that is the main problem I actually have with it because you could definitely say that
00:07:24I'm just not creative enough or not open minded enough to find the right use cases for it.
00:07:31But security is a huge issue I have with OpenClaw.
00:07:37And I know there are people that will tell you that they used it for weeks and nothing
00:07:42went wrong or that this will all of course get better.
00:07:47And I will say the first argument that nothing went wrong, well, that's not the kind of argument
00:07:55that convinces me because just because nothing went wrong for you does not mean that nothing
00:08:02is going wrong in general and that there wouldn't be huge security issues that can of course
00:08:11be exploited by bad actors or that of course things could simply go wrong because AI, large
00:08:18language models, is unpredictable.
00:08:22Of course the chance for it erasing your hard drive isn't extremely high, it's super low,
00:08:28but it's not zero.
00:08:29And it will never be zero with large language models without additional checks.
00:08:35They can be unpredictable.
00:08:37In addition, in the official security documentation of OpenClaw they are correctly stating that
00:08:44prompt injection is not solved.
00:08:47Of course the latest models like GPT-5.2 and so on got much better at protecting against
00:08:55prompt injection.
00:08:56They got much better at following instructions, following a system prompt and so on.
00:09:00But there is no 100% protection against prompt injection.
00:09:06And the way large language models work, there never will be.
00:09:10So prompt injection attacks can't be ruled out and of course the more popular tools like
00:09:17OpenClaw get, the more people that are running it, the more it will be in the focus of bad
00:09:23actors.
00:09:24And there are various ways of injecting prompts into an active OpenClaw bot.
00:09:32Because you may think, well I'm the only one communicating with it, I have my telegram bot
00:09:36set up and only I have access to that so I'm safe.
00:09:40Well think again.
00:09:42For example, there is this idea of skills with OpenClaw and you may already know skills from
00:09:48coding agents like Claude Code.
00:09:50The idea is kind of the same.
00:09:52The idea is that you expose extra context in the end, an extra markdown document, though
00:09:59potentially also coupled with executable scripts, to the agent to give it more capabilities.
00:10:06For example, to give it some extra documentation on how to interact with Slack here.
00:10:11In this example.
00:10:13And then as mentioned, a skill can also come bundled up with some additional script which
00:10:18the AI agent can execute to efficiently do something like generate an image or send a
00:10:23message to Slack or whatever it is.
00:10:26Now the problem is that ClawHub, the official skills hub for OpenClaw, initially at least
00:10:34allowed everybody to submit skills.
00:10:37So it was pretty easy to run supply chain attacks, which we saw from the NPM ecosystem last year,
00:10:47totally unrelated to AI, which essentially means that a bad actor can publish a skill
00:10:55that tells the AI to do something bad and that is just a prompt injection.
00:11:00So just by installing a malicious skill, you could expose your agent to a prompt injection
00:11:06attack.
00:11:07Now, some fixes were implemented here.
00:11:10So at the point of time where I'm recording this, not everybody is able to submit skills.
00:11:15So the security was vastly improved here.
00:11:18But if we learned anything from the supply chain attacks on NPM last year, it is that
00:11:24we definitely can't rule out that this skills feature, this hub, can be used to inject malicious
00:11:31instructions into the ecosystem and into your OpenClaw setup potentially.
00:11:38And that's not the only way of running prompt injection attacks.
00:11:41If your bot reaches out to the internet, and it most likely does, it will of course visit
00:11:46websites or read content from websites.
00:11:50And there we also can have malicious websites that trick the AI into following instructions,
00:11:58commands that are embedded on that website.
00:12:02Every piece of text your bot reads and processes is a prompt in the end.
00:12:09So every website it visits is a prompt or contains a prompt that it can follow and execute.
00:12:17And then we got other potential sources as well.
00:12:20Like for example, emails.
00:12:21If you use your bot to process incoming emails, every email of course acts as a prompt.
00:12:29So prompt injection is a serious, huge risk.
00:12:34And just because nothing went wrong for you ever doesn't mean things can't go wrong.
00:12:40Now you may of course say, well, I'm running my bot on a VPS.
00:12:45Or maybe you're using something like MoldWorker, which is in the end a pre-built blueprint or
00:12:51setup provided by Cloudflare, which uses various Cloudflare services for securely hosting and
00:12:58running OpenClaw.
00:13:00And you should be doing that.
00:13:02You should be doing that.
00:13:03You should absolutely not run it on your system.
00:13:08And there also are features like sandboxing.
00:13:11So that is actually built into OpenClaw.
00:13:15They have an entire documentation article about sandboxing and how you can make sure your agents
00:13:21run in a sandbox, which essentially is a Docker container, so that the blast radius is reduced.
00:13:27By the way, the documentation, it's a lot, but it's not good.
00:13:32I spent hours, literally many, many hours trying to secure my setup.
00:13:37And I'm sure it's all in there.
00:13:39And I saw that security article.
00:13:42It's just so, so hard.
00:13:44And before you tell me that I should have asked my OpenClaw bot, I did a lot.
00:13:49It sometimes worked.
00:13:50It sometimes didn't.
00:13:51It was a lot of trial and error.
00:13:52So yeah, the documentation and how hard it is to get useful information out of it is its
00:14:00own problem, but of course one that can be fixed.
00:14:04And I appreciate the fact that at least information is theoretically here, just to be clear.
00:14:09But yeah, so sandboxing is built in and is available and allows you to reduce the blast
00:14:17radius, which is super important because in the end, due to the prompt injection vulnerabilities
00:14:27that exist that can't really be solved, reducing the blast radius is important.
00:14:34So for example, if you use sandboxing, if you're running your overall setup on a VPS, the worst
00:14:40thing that could happen is that of course the stuff in the sandbox gets deleted or, depending
00:14:46on your setup, maybe your entire VPS, but not your system.
00:14:50That's the reason why I would never run OpenClaw on my machine, on my main machine.
00:14:57I absolutely don't want it to erase files, my hard drive, whatever on my machine.
00:15:02So yeah, reducing the blast radius is important, unfortunately, though that still doesn't protect
00:15:07you against the worst things that could happen.
00:15:10Because with prompt injection attacks, of course an attacker could try to delete files on your
00:15:15system, but even worse than that, they could steal stuff.
00:15:19So data exfiltration is, in my opinion, a bigger problem than an attacker deleting files on
00:15:29your system.
00:15:31And data exfiltration is 100% something that can happen or that can be the result of a prompt
00:15:39injection attack because of course an attacker could get the AI to gather all the secrets
00:15:44it knows, all the passwords it knows, and it needs to know some passwords in order to use
00:15:49your email account, maybe you gave it your credit card number, so it will have access
00:15:55to various pieces of data, and that data could be collected due to a prompt injection attack
00:16:01and could be exfiltrated.
00:16:04And that is a bigger risk than it potentially deleting your hard drive if you set it up correctly.
00:16:12Of course it could also do other things.
00:16:14It could turn your VPS into a bot for DDoS attacks, for example.
00:16:21So that's just one example, there is an endless amount of things that could be done, of course,
00:16:26but the main thing to take away is that through prompt injection attacks, attackers could take
00:16:33over your bot and therefore your machine.
00:16:36They could get your bot to install malicious software, to tweak the system configuration
00:16:42depending on the access rights it has, of course, and then they could potentially take over your
00:16:47VPS, your machine.
00:16:49These are the kind of things that could happen.
00:16:52So access rights are the important keyword here, and sandboxing is one crucial part in
00:16:59that.
00:17:00It's not all, though.
00:17:01You can configure your open cloud bot such that it has to ask for approval when running
00:17:09in sandbox mode, at least for executing certain tasks.
00:17:14But that kind of defeats the idea of having a bot that runs behind the scenes and does
00:17:19stuff whilst you are away because you all the time have to give it approval for all the kind
00:17:26of stuff it wants to do suddenly.
00:17:29And that of course gets super annoying, so you just might not read anymore what it's asking
00:17:33approval for, you might always grant approval and at some point you might turn it off because
00:17:38it just annoys you.
00:17:39Because again, it's not really useful if you have to manually approve everything.
00:17:44So combine that, combine these security issues and the fact that I did not find a way of running
00:17:52this securely in a way I would feel good with, with the fact that I didn't really find those
00:17:58super amazing use cases.
00:18:01Combine these things and you end up with a situation where I'm just not using OpenClaw
00:18:06anymore.
00:18:07And of course that can be different for you and I read many posts of people that were super
00:18:11excited and yeah, it's possible that the future of personal AI powered assistance looks something
00:18:18like this.
00:18:19It's possible that better security mechanisms can be introduced and can be invented that
00:18:27don't require your constant approval for everything or that make that approval process easier and
00:18:34therefore allow you to securely run assistance like this.
00:18:38That is all possible.
00:18:39I wouldn't rule out that this happens.
00:18:43And of course it is an impressive feat that a single developer built this tool.
00:18:48Though of course not looking at the code at all does have its price as many bugs and security
00:18:56problems certainly also show.
00:18:59Not that software wouldn't have any security problems if you would review everything but
00:19:05it certainly in my opinion doesn't help if you don't look at the code at all.
00:19:09But nonetheless it's an impressive feat and if you ask yourself the question why OpenAI
00:19:14or Google didn't come up with a product like this, the reason may be a lack of innovation
00:19:20but of course it's also the fact that a tool like this can right now only exist as open
00:19:26source software without any legal obligations because this thing is not something Google
00:19:34could sell or run for you with broad permissions.
00:19:38But of course it's definitely possible that this is the initial spark that gives us safer
00:19:44maybe more useful personal AI powered assistance in the future.
00:19:50And just to also briefly mention Moltbook, that is a thing I totally did not understand.
00:19:58It was meant to be a social network, I read it for AIs only, it turned out that it was
00:20:05actually very human orchestrated and quite a bit fake as I understand it and it had gapping
00:20:14security issues and yeah I don't know, AI has positive use cases or positive implications
00:20:24I guess, AI has a lot of negative implications.
00:20:29This thing here is not something the world needs in my opinion.
00:20:33But yeah, OpenClaw, definitely interesting, maybe super useful for you, definitely not
00:20:41my cup of tea coffee right now.

Key Takeaway

While OpenClaw is a technical feat of automation, its severe security vulnerabilities and lack of compelling unique use cases make it a risky and impractical tool for the average user.

Highlights

OpenClaw (formerly Clodbot) presents a significant trade-off between AI utility and user security.

The inherent unpredictability of Large Language Models (LLMs) means risks like accidental file deletion are low but never zero.

Prompt injection remains an unsolved vulnerability, where malicious websites or emails can hijack the AI's instructions.

Supply chain attacks via community-submitted 'skills' pose a risk of data exfiltration and unauthorized system access.

Effective security measures like sandboxing and manual approvals often diminish the practical automation value of the tool.

Major tech companies like Google or OpenAI likely avoid such products due to legal liabilities and safety constraints.

Timeline

The Analogy of the Untrustworthy Helper

The speaker introduces OpenClaw by comparing it to a friendly but unpredictable village assistant who requires broad permissions to be useful. He explains that while this assistant can perform many chores, they might also come to dangerous conclusions, such as burning down a house to clean it. The analogy highlights the central tension between granting an AI agent the 'keys to your house' and the potential for disastrous, unintended consequences. This section establishes the speaker's skepticism, noting that as permissions are restricted for safety, the tool's utility rapidly declines. Ultimately, the speaker concludes that the risks currently outweigh the benefits promised by the 'hype' surrounding this new AI agent.

Practical Testing and Use Case Challenges

In this segment, the speaker details his hands-on experience setting up OpenClaw on a Virtual Private Server (VPS) rather than his personal MacBook for safety reasons. He explores popular use cases mentioned by others, such as building dashboards, finding cheap flights, or conducting web research, but finds them lacking. For instance, he prefers established AI tools like Gemini's deep research or Perplexity over a custom bot that might perform worse. He acknowledges that while he is a heavy user of AI for coding and research, OpenClaw specifically failed to provide a unique value proposition. The speaker admits that while creativity might be a factor, the existing specialized tools already handle most tasks more efficiently and securely.

Deep Dive into Security Risks and LLM Unpredictability

The speaker addresses the 'it hasn't happened to me' fallacy, arguing that just because a user hasn't been hacked yet doesn't mean the system is secure. He emphasizes that LLMs are fundamentally unpredictable and that the risk of system-wide damage is a non-zero probability that cannot be ignored. A major focus is placed on prompt injection, a vulnerability where the AI follows instructions hidden in external data rather than the user's commands. The speaker explains that as these tools gain popularity, they will increasingly become targets for bad actors seeking to exploit these unpatchable flaws. This section serves as a technical warning about the fragility of current AI agent architectures when exposed to the open internet.

Supply Chain Attacks and Data Exfiltration

This section examines the 'ClawHub' ecosystem and the dangers of community-contributed 'skills' which are essentially plugins for the AI. The speaker compares the situation to past NPM supply chain attacks, where malicious code is disguised as a helpful utility to gain access to a user's system. He notes that while some security improvements have been made, the risk of an agent reading a malicious website or email and executing a hidden command remains high. The primary concern shifted from simple file deletion to data exfiltration, where the AI might be tricked into sending passwords or credit card info to an attacker. He concludes that the documentation for securing these setups is dense and difficult for the average user to navigate effectively.

The Failure of Sandboxing and Approval Workflows

The speaker discusses technical mitigation strategies like sandboxing within Docker containers to reduce the 'blast radius' of an attack. However, he argues that while sandboxing protects the host system, it doesn't prevent the AI from being turned into a bot for DDoS attacks or leaking sensitive data. He highlights a paradox: requiring manual approval for every action makes the bot secure but destroys the 'automation' aspect that makes it useful. If a user has to approve every minor step, the bot becomes more of a nuisance than a time-saver, leading many to eventually turn off security features. This conflict between security and convenience is presented as the main reason why the speaker has stopped using OpenClaw entirely.

Industry Implications and Final Thoughts on Moldbook

The video concludes with a reflection on why tech giants like Google and OpenAI haven't released similar agentic products, suggesting it is due to the massive legal and safety liabilities involved. The speaker acknowledges that while OpenClaw is an impressive feat for a solo developer, it currently exists in a 'wild west' state without legal obligations. He briefly mentions 'Moldbook,' criticizing it as a 'fake' or highly orchestrated social network for AIs that lacks genuine utility and suffers from security issues. His final verdict is that while the project may spark future innovations in safer AI assistance, it is currently not ready for mainstream use. He ends by reaffirming his decision to stick with more controlled and specialized AI tools for his daily workflow.

Community Posts

View all posts