This Is What Clawdbot Was Missing

AAI LABS
Computing/Software, Small Business/Startups, Internet Technology

Transcript

00:00:00Is OpenClaw the closest thing we have to AGI or is it just a security nightmare?
00:00:04The biggest problem with this agent is security.
00:00:07Cisco called it a security nightmare and even the project's own security policy has some
00:00:11serious flaws.
00:00:12Many people have been exploiting the flawed architecture, gaining access to sensitive credentials
00:00:17through exposed endpoints.
00:00:18So our team spent some time figuring out whether OpenClaw is as free and secure as it is claimed
00:00:23to be.
00:00:24What we found out during our testing raised some genuine concerns.
00:00:27For those who don't know, OpenClaw is a self-hosted AI assistant that became the fastest growing
00:00:32open source project in history.
00:00:34But open source doesn't mean free and self-hosted doesn't mean secure.
00:00:38Originally called Clawdbot, it was rebranded to Moltbot because of its name's similarity to
00:00:42Anthropic's Claude, until it finally got the name OpenClaw, two rebrandings in
00:00:47the span of just 3 days.
00:00:49Our team tested OpenClaw and honestly, it had one of the most troublesome setups we've
00:00:53ever encountered.
00:00:54The setup process is listed in detail in their official documentation and you can follow it
00:00:59step by step to install it.
00:01:00But while following this guide, we ran into multiple problems.
00:01:03The installation itself worked but the channel integrations were a problem.
00:01:07When we connected it to WhatsApp, it kept disconnecting because of 408 errors and we
00:01:12were unable to send any messages.
00:01:14So we connected it through Discord instead, which had a stable connection and an easier setup,
00:01:18and finally were able to chat with it.
00:01:20To make this installation and setup easier, we created a complete document that you can
00:01:24find in AI Labs Pro.
00:01:26It contains step by step instructions on how to install it without running into the issues
00:01:30we faced.
00:01:31For those who don't know, it is our recently launched community where you get ready-to-use
00:01:35templates, prompts, and all the commands and skills that you can plug directly into your projects,
00:01:39for this video and all the previous videos.
00:01:42If you've found value in what we do and want to support the channel, this is the best way
00:01:45to do it.
00:01:46Links in the description.
00:01:48OpenClaw is open source.
00:01:49This means the software is available for free, but that's not the real cost, because you are
00:01:53not actually paying in subscriptions, you are paying in tokens.
00:01:56It supports lots of popular models and even OpenRouter.
00:02:00But even though the application is free, each of these models is costly and the way the architecture
00:02:04of OpenClaw is designed, you will end up spending a lot of money on this alone.
00:02:09OpenClaw doesn't operate on system prompts alone.
00:02:11It has built in memory, reasoning and integration with skills, channels and more.
00:02:16So even a simple cron job, if run daily, would cost around $128 per month, all because it
00:02:22sends a lot of information with each query.
00:02:24And that's just for one job because in practice, OpenClaw is used for way more use cases than
00:02:29just one job.
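The rough math behind a figure like that can be sketched as follows. The token counts and prices below are illustrative assumptions, not measured OpenClaw values:

```python
# Back-of-the-envelope estimate of what a recurring agent job can cost when each
# query carries a large payload of memory, skills, and conversation context.
# All numbers here are illustrative assumptions, not measured OpenClaw values.

def monthly_cost(tokens_per_query: int, queries_per_day: int,
                 price_per_million: float, days: int = 30) -> float:
    """Cost in dollars for a month of queries at a flat token size per query."""
    total_tokens = tokens_per_query * queries_per_day * days
    return total_tokens / 1_000_000 * price_per_million

# Assume each query ships ~85k tokens of context at $10 per million input
# tokens and the job fires 5 times a day: roughly the $128/month figure above.
print(round(monthly_cost(85_000, 5, 10.0), 2))
```

The point of the sketch is that the per-query token payload, not the model choice, dominates the bill: halving the context halves the cost, while switching models only changes the price-per-million factor.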
00:02:30People have been complaining that even after switching to smaller models, costs weren't
00:02:34lowered much.
00:02:35This means the issue isn't in the model but in how it is being used inside the product.
00:02:39When we set up an automation job to check emails every hour and report back a summary of
00:02:44important findings, we noticed that the API calls increased significantly and the usage
00:02:48burned through our credits quickly.
00:02:50This is because we were using OpenAI's key, and these models are costly in terms of
00:02:55per-token pricing, and since the entire conversation was being sent, the cost per query grew.
00:03:00Another reason for the cost is that it sends a heartbeat to check the status of the server
00:03:04and periodically run tasks.
00:03:06As many people complained, their API usage kept rising until they reached the end of
00:03:10their credits.
00:03:11They also suggested increasing the heartbeat interval to more than 2 hours and clearing
00:03:15the session before sleep, because the entire chat history gets sent with each query
00:03:19to preserve the conversation's context.
00:03:22This is what naturally burns a lot of tokens.
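Why resending the whole history is so expensive can be sketched numerically. Assuming, purely for illustration, a fixed token count per message, total billed tokens grow quadratically with conversation length:

```python
# Sketch of why resending the whole chat history burns tokens: if query i
# includes messages 1..i, cumulative billed input tokens grow quadratically
# in the number of messages. The per-message token count is an assumption.

def cumulative_tokens(num_messages: int, tokens_per_message: int = 500) -> int:
    """Total input tokens billed after num_messages queries, when each query
    resends everything that came before it."""
    return sum(i * tokens_per_message for i in range(1, num_messages + 1))

# Compare against a hypothetical agent that sent each message only once.
flat = lambda n, t=500: n * t
print(cumulative_tokens(10) / flat(10))    # 10 messages: 5.5x the flat cost
print(cumulative_tokens(100) / flat(100))  # 100 messages: 50.5x the flat cost
```

This is why clearing the session periodically helps so much: it resets the quadratic curve back to zero rather than merely slowing its growth.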
00:03:24This increasing conversation length also led to longer response times.
00:03:28We noticed that each of our responses with OpenClaw was gradually getting slower.
00:03:32When we had Claude analyze the logs, we found that this indeed was the pattern.
00:03:36The response time increased gradually as the context built up, starting at 2 to 12 seconds
00:03:41per response when the session was fresh, and going up to 119 seconds once the context
00:03:46had built up significantly.
00:03:48Tool calls within the responses also added overhead.
00:03:51Our suggestion would be to monitor your API costs, set up alerts and have a proper budget
00:03:55for the API key you are using so it doesn't get out of control.
00:03:59You can do this with OpenAI, Google Cloud and other model providers, just as we did with
00:04:03our setup on OpenAI.
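Provider dashboards aside, the same guard can be kept client-side. Below is a minimal sketch of a budget tracker that accumulates token usage and flags when a monthly cap is crossed; the cap, price, and per-call token count are assumptions, and in a real setup you would feed it the usage totals your provider's API returns with each response:

```python
# Minimal client-side budget guard: accumulate token usage per call and flag
# once an assumed monthly dollar cap is crossed. In a real setup, feed record()
# the usage totals returned by your model provider's API responses.

class BudgetGuard:
    def __init__(self, monthly_cap_usd: float, price_per_million: float):
        self.cap = monthly_cap_usd
        self.price = price_per_million
        self.tokens_used = 0

    def record(self, tokens: int) -> bool:
        """Record usage for one call; return True while still under budget."""
        self.tokens_used += tokens
        return self.spent() <= self.cap

    def spent(self) -> float:
        return self.tokens_used / 1_000_000 * self.price

guard = BudgetGuard(monthly_cap_usd=50.0, price_per_million=10.0)
for _ in range(40):
    if not guard.record(150_000):  # e.g. one heavy agent query (assumed size)
        print(f"over budget at ${guard.spent():.2f}")
        break
```

A hard stop like this is cruder than a provider-side alert, but it works the same way with any model backend, including self-hosted ones that have no billing dashboard at all.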
00:04:05If you are using OpenClaw locally, models from Ollama are a good option.
00:04:08Ollama basically lets you run LLMs locally and in turn you avoid API costs.
00:04:13But for this solution, your system needs to be capable enough to run LLMs, which take significant
00:04:18compute to run.
00:04:19Hence cost is inevitable when you are going for powerful models, so it's something you
00:04:22need to manage carefully.
00:04:24Personal AI agents like OpenClaw are a security nightmare.
00:04:27All of your credentials and sessions are stored in plain JSON files which contain device
00:04:32information and details about your identity.
00:04:34These files are readable by anyone with system access.
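Until the project encrypts these files, the least you can do is restrict them to your own user. A minimal POSIX sketch follows; the directory path is a placeholder, not OpenClaw's actual layout:

```python
# Tighten permissions on plaintext credential files so only the owning user can
# read or write them. The directory you pass in is a placeholder; point it at
# wherever your agent actually stores its JSON state. POSIX chmod semantics.
import os
import stat
from pathlib import Path

def lock_down(state_dir: str) -> list[str]:
    """chmod 600 every JSON file under state_dir that is readable or writable
    by group/others; return the paths that were changed."""
    changed = []
    for path in Path(state_dir).rglob("*.json"):
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & (stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH | stat.S_IWOTH):
            os.chmod(path, 0o600)
            changed.append(str(path))
    return changed
```

This only defends against other local users; it does nothing against the agent itself, which runs as you, so it complements rather than replaces the sandboxing discussed later.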
00:04:38You might think that because OpenClaw runs locally, unless you are running it on a VPS, this
00:04:42isn't a problem.
00:04:43But here's the thing.
00:04:44OpenClaw has the ability to run shell commands, access files on disk and execute scripts on
00:04:49your machine.
00:04:50Giving this kind of power to an AI is risky, because if used wrongly, it can leak
00:04:52sensitive information.
00:04:56Cisco tested this exact issue and found real problems.
00:04:59OpenClaw supports skills, and those made by the community are publicly available on Clawhub.
00:05:04Cisco scanned these skills using their now open-source skill scanner and uncovered nine
00:05:08security findings,
00:05:09two of them critical and five high severity, in just one skill.
00:05:12They found that the skill they tested was functionally malware.
00:05:15It explicitly instructed the bot to execute a curl command that sent data to an external
00:05:20server controlled by the skill author.
00:05:22Saving passwords in plain text is especially severe, because even a seemingly innocent skill
00:05:26could be disastrous with the wrong instructions.
00:05:29Now the skills aren't the only concern.
00:05:30We also have to worry about prompt injections.
00:05:33OpenClaw's security policy explicitly mentions that injection attacks are considered out of
00:05:37scope, meaning they aren't responsible for any information leaks caused by such attacks.
00:05:42Our suggestion is to rely on models from OpenAI and Anthropic which have their own built-in
00:05:47guardrails, meaning they are less susceptible to these obvious attacks.
00:05:51Even though OpenClaw doesn't have any inherent guardrails, these models can recognize
00:05:55bad security practices and refuse to expose credentials through prompt injections; our
00:06:00setup with OpenAI refused to give up credentials even when we told it we were the server owners.
00:06:05But these can also be overridden with clever injections.
00:06:08As for skills, you need to make sure that only the skills that are absolutely necessary
00:06:12are added.
00:06:13Skills that involve passwords or other sensitive systems and aren't needed shouldn't be
00:06:17added, so the AI doesn't accidentally do something you don't want it to do.
00:06:22If you are installing from the community, make sure to run the scanner which is now open source
00:06:26or only install skills that are verified by the community.
00:06:29Also if you are enjoying our content, consider pressing the hype button because it helps us
00:06:33create more content like this and reach out to more people.
00:06:36Now OpenClaw has access to almost all of your system, so a good practice is to make sure
00:06:41it doesn't have access to any sensitive data.
00:06:44Ideally, use it in a separate account that doesn't contain any sensitive information.
00:06:49Even if it does have some access, it shouldn't be able to harm your system.
00:06:53The best approach is to sandbox it using Docker because Docker containers are isolated from
00:06:57each other and include restrictions that prevent one container from accessing other system resources.
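What that isolation looks like in practice can be sketched as the flags you would pass to docker run. The image name and data path below are placeholders, and this assumes you have a containerized build of the agent for your setup; the flags themselves are standard Docker options:

```python
# Sketch of a locked-down `docker run` invocation for an agent container.
# Image name and data path are placeholders; the flags are standard Docker
# options for reducing what a compromised container can do.

def sandbox_command(image: str, data_dir: str) -> list[str]:
    return [
        "docker", "run", "--rm",
        "--read-only",                         # immutable root filesystem
        "--cap-drop=ALL",                      # drop all Linux capabilities
        "--security-opt", "no-new-privileges", # block privilege escalation
        "--memory=2g", "--cpus=1",             # resource ceilings
        "--network", "bridge",                 # no host networking
        "-v", f"{data_dir}:/data",             # one explicit mount, nothing else
        image,
    ]

print(" ".join(sandbox_command("openclaw:placeholder", "/srv/agent-data")))
```

The principle is allow-listing: the container starts with nothing and you grant back only the one mount and the network access the agent genuinely needs.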
00:07:03Another option is to spin up a virtual machine that contains only your OpenClaw setup.
00:07:07The key is to remove access to anything you're not using.
00:07:09For example, if you connected Discord but no longer want to use it, you can reset the
00:07:13token to revoke OpenClaw's access.
00:07:15This way it can't do greater harm to your setup and you can make the most out of it.
00:07:19That brings us to the end of this video.
00:07:21If you'd like to support the channel and help us keep making videos like this, you can do
00:07:25so by using the super thanks button below.
00:07:27As always, thank you for watching and I'll see you in the next one.

Key Takeaway

While OpenClaw offers powerful self-hosted AI capabilities, users must implement strict sandboxing and budget monitoring to mitigate its inherent security risks and high operational costs.

Highlights

OpenClaw (formerly Clawdbot) is a self-hosted AI agent that faces significant security and cost concerns.

The project architecture results in high token consumption, with simple tasks potentially costing $128 per month.

Security flaws include storing credentials in plain JSON files and a lack of inherent guardrails against prompt injections.

Community-made 'skills' on Clawhub have been found to contain malware-like behavior that exfiltrates data.

Sandboxing via Docker or Virtual Machines is highly recommended to isolate the agent from sensitive system data.

API response times degrade significantly as conversation context builds up, increasing from seconds to minutes.

Timeline

Introduction to OpenClaw and Initial Concerns

The video introduces OpenClaw as a self-hosted AI assistant that rapidly became a popular open-source project. Originally known as Clawdbot and briefly Moltbot, it underwent multiple rebrandings due to naming conflicts with Anthropic's Claude. The speaker immediately highlights a central paradox: open source does not equate to free, and self-hosted does not equate to secure. Early mentions of Cisco's 'security nightmare' label set a cautionary tone for the analysis. This section establishes the scope of the team's investigation into whether the tool is truly as reliable as claimed.

Setup Challenges and Integration Issues

The presenters recount their 'troublesome' experience trying to install the agent following official documentation. While the core installation was successful, they encountered persistent 408 errors during WhatsApp integration that prevented messaging. They eventually found a stable alternative by connecting via Discord, which offered a much smoother setup process. To help other users avoid these hurdles, the team mentions their 'AI Labs Pro' community which provides optimized templates and commands. This segment emphasizes that technical friction is a significant barrier for early adopters of this specific AI agent.

Hidden Costs and Token Inefficiency

The analysis shifts to the high cost of operation, noting that users pay in tokens rather than monthly subscriptions. Because the architecture sends massive amounts of context with every query, a simple daily cron job can cost upwards of $128 per month. Users have reported that switching to smaller models does not significantly lower expenses, suggesting the issue lies in the product's design. The speaker identifies the 'heartbeat' signal and full conversation history transfer as the primary drivers of this credit drain. They suggest increasing heartbeat intervals and clearing sessions frequently to manage these mounting expenses.

Performance Latency and Cost Management Tips

As the context window fills up, the response time for OpenClaw increases dramatically, sometimes jumping from 2 seconds to nearly 2 minutes. The team used Claude to analyze logs, confirming a direct correlation between context length and processing lag. To manage these issues, the speaker advises setting strict API budget alerts through providers like OpenAI or Google Cloud. For those seeking a cheaper alternative, running models locally via Ollama is suggested, provided the user has a powerful enough hardware setup. This section highlights the trade-off between model power and the inevitable costs of energy or API credits.

Security Vulnerabilities and Malware Risks

The speaker identifies a major security flaw: OpenClaw stores sensitive credentials and session data in plain, unencrypted JSON files. Because the agent has the power to execute shell commands and scripts, this data is highly vulnerable to local or remote exploitation. A study by Cisco revealed that some community-made 'skills' on Clawhub were effectively malware designed to steal data via curl commands. These skills explicitly instructed the bot to send information to external servers controlled by malicious authors. This reveals a critical danger in trusting community-contributed extensions without a thorough code audit.

Mitigation Strategies and Safe Practices

The final section outlines practical steps to secure an OpenClaw installation, starting with the use of models from OpenAI or Anthropic that have built-in guardrails. The speaker admits that while prompt injections are 'out of scope' for the project's official security policy, certain models are better at resisting credential requests. The most effective protection recommended is sandboxing the entire application within a Docker container or a dedicated Virtual Machine. Users are also urged to use separate accounts for AI testing and to revoke access tokens immediately when they are no longer needed. The video concludes by encouraging viewers to use security scanners and verified community skills to minimize risk.
