Let me set the scene for you.
OpenClaw — the open-source AI agent that went from zero to 200,000 GitHub stars in 84 days — just became the fastest-growing software repository in history. Faster than React. Closing in on the Linux kernel. Two million visitors hit the site in a single week. Peter Steinberger, the guy who built it, got hired by OpenAI three days ago. For what? I have no idea. But they hired him. Because of course they did.
Now everyone has OpenClaw. All good, right? Everyone can just run OpenClaw and the AI agents are going to take over the world? That is the plan?
Probably not. But nobody wants to hear that part.
The security thing nobody wants to hear
A lot of you simply do not care about security. And for your personal projects, fine. Here is one thing people need to understand about how security information works in 2026.
If you are on X or LinkedIn posting about security for clout, you are not the threat. Maybe you are a white hat, a gray hat, building a brand around awareness. Good for you. But any real black hat would not come within 75 miles of a social media post about what they are doing. It is the opposite. They do not want to be detected. If they are in your system, staying quiet is the whole point. They are going to shut the f*** up and go on with their day.
This is a problem. The way we disseminate information in 2026 is through clout posting. The channels we subscribe to for security news are tuned by algorithms optimized for hype and clicks. Most information propagates through them just fine. But really smart people — the ones you should be worried about — know how to take advantage of this. They know how to land strategic PRs. They know how to game open-source trust. I am not pointing at OpenClaw specifically. But it is a real risk. And if you are not thinking about it, you are not serious.
Cisco just proved the point. They ran their Skill Scanner against OpenClaw’s community skills and found that 26% of the 31,000 skills they analyzed contained at least one vulnerability. They tested a skill called “What Would Elon Do?” and it was functionally malware — silently exfiltrating data to an external server and using prompt injection to bypass safety guidelines. That skill had been downloaded thousands of times. Incredible. There are 230+ confirmed malicious skills on ClawHub as of right now. Koi Security found 341 malicious skills including an entire infostealer campaign.
42,665 publicly exposed OpenClaw instances. 93.4% vulnerable to authentication bypass. 26% of skills contain vulnerabilities. 230+ confirmed malicious skills. One in four things you download for this platform could be stealing your credentials. This is not a hypothetical risk. This is happening right now.
OK but why does any of this actually matter
Your first reaction might be: “Well, we need secure environments for OpenClaw.” OK, sweet. Go look. There are now probably a thousand companies offering you a secure place to run OpenClaw. DigitalOcean has a one-click deploy. Every cloud provider is racing to offer hardened images. Problem solved. Everyone can go home.
Now what? Congratulations, you have a secure lobster. What is it doing?
Now you have a bunch of hardened OpenClaw instances deployed. Where? A person on their home machine? A small business with 10 employees all spinning up agents from their apartments? An enterprise, where it is even worse: OpenClaw instances running on your private network, doing all sorts of shit you have absolutely no visibility into?
You cannot trace it. There is no recourse. The risk is intolerable. Even the most risk-tolerant CIO should be losing sleep over this. How many of your employees are doing this right now? How many of them have Claude Code, Codex, OpenClaw on their home IPs, on open networks, with sensitive company data flowing through all of it?
The question is not if you are exposed. It is how bad. How many of your employees are pulling this shit right now, and can you answer a single question about what those agents have access to?
Why did we bother with 30 years of enterprise compliance
This is the part that actually upsets me.
We spent decades — actual decades of human effort — building enterprise compliance. Trillions of dollars turned over in business value, stock value, product value, and sales built on top of enterprise software with auditing, logging, and kill switches for human behavior. McKinsey, Deloitte, they took billions of your dollars to solve these governance problems. SOC 2. HIPAA. GDPR. FedRAMP. All of it.
And now we hand the keys to AI agents that work hundreds of times faster than humans and can spawn hundreds of copies of themselves? With zero oversight? With no audit trail? With no federation of what they are allowed to do and where? Sounds like a great plan. What could go wrong.
Where are McKinsey and Deloitte now? Where is their governance framework for this? Where is their tech implementation? Probably still in a slide deck somewhere billing at $400 an hour. I do not want to hear any of it. I do not want to hear one more person get up on X or LinkedIn and say one thing about observability or security when their own people are not even running agents in the cloud — they are running them at home. Claude Code. Codex. OpenClaw. All of it on personal machines, and nobody has any idea what is going on.
This is completely unserious. And somehow this is just accepted. If we are not asking the most fundamental questions about governance, federation, and security for AI agents, we have not learned a single thing from three decades of enterprise software.
What you actually need
Would it not be nice to just understand what the f*** is going on?
That is the starting point. Not a fancy dashboard. Not another security startup with a landing page and a waitlist. Actual visibility into what agents are running, where they are running, what they have access to, and who is responsible when something goes wrong.
And then — here is the radical concept — setting the rules from the beginning. Not after the breach. Not after the compliance incident. From the beginning. A federated policy that says: you want to run OpenClaw, you want to run Claude Code, you want to run Codex, you want to do all this fun stuff? Here are the rules. Here is where these things are allowed to run. Here is what happens if you put it on your own machine — does it need to be on the VPN? What data can it touch? What can it send externally?
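To be concrete about what "here are the rules" could even look like, here is a rough sketch in Python. This is not Agent Taskflow's API and it is not anything OpenClaw ships; every class, field, and value in it is made up for illustration. The point is only that the rules are boring to write down: an inventory record per agent, a policy object, and a check that runs before the agent gets to do anything.

```python
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """One entry in the agent inventory: what is running, where, and who owns it."""
    name: str                                              # e.g. "openclaw", "claude-code", "codex"
    host: str                                               # machine it runs on
    network: str                                            # "corp-vpn", "office-lan", "home-vpn", "home-open"
    owner: str                                              # human accountable when something goes wrong
    data_scopes: set[str] = field(default_factory=set)      # data it is allowed to touch
    egress_domains: set[str] = field(default_factory=set)   # where it may send data


@dataclass
class AgentPolicy:
    """The rules, written down before anything runs, not after the breach."""
    allowed_agents: set[str]
    allowed_networks: set[str]
    allowed_data_scopes: set[str]
    allowed_egress_domains: set[str]


def evaluate(agent: AgentRecord, policy: AgentPolicy) -> list[str]:
    """Return every rule this agent violates. Empty list means it may run."""
    violations = []
    if agent.name not in policy.allowed_agents:
        violations.append(f"'{agent.name}' is not an approved agent")
    if agent.network not in policy.allowed_networks:
        violations.append(f"network '{agent.network}' is not allowed (home machines go through the VPN)")
    for scope in agent.data_scopes - policy.allowed_data_scopes:
        violations.append(f"not allowed to touch data scope '{scope}'")
    for domain in agent.egress_domains - policy.allowed_egress_domains:
        violations.append(f"not allowed to send data to '{domain}'")
    return violations


if __name__ == "__main__":
    # Hypothetical policy and agent, purely for illustration.
    policy = AgentPolicy(
        allowed_agents={"openclaw", "claude-code", "codex"},
        allowed_networks={"corp-vpn", "office-lan", "home-vpn"},
        allowed_data_scopes={"public-docs", "internal-wiki"},
        allowed_egress_domains={"api.anthropic.com", "api.openai.com"},
    )
    agent = AgentRecord(
        name="openclaw",
        host="dev-laptop-42",
        network="home-open",
        owner="jane@example.com",
        data_scopes={"internal-wiki", "customer-db"},
        egress_domains={"api.openai.com", "sketchy-analytics.example"},
    )
    for violation in evaluate(agent, policy):
        print("VIOLATION:", violation)
```

Nothing in that sketch is hard. The only design choice that matters is that the check runs before the agent starts, not as a postmortem query after the data is already gone.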
Thirty years of enterprise compliance and we are back to hoping nobody does anything stupid with company data on an open network. Incredible.
And that is not even the hard part
Everything I just described — the security, the governance, the federation — that is just the execution layer. And it still assumes people are going to use Claude, GPT, Gemini, Grok, hosted DeepSeek, all the cloud-based models.
The next frontier is running the models locally. On hardware your enterprise controls. Where you pay for the compute because you own the compute. And the questions get worse.
Now think about your data. Do you know where it is going right now? Do you know what country it is sitting in? Do you know which third-party services your agents are sending it to? Can you answer any of these questions?
I do not think so.
That is what we are solving at Agent Taskflow. Not because governance is exciting — it is not. It is the single most important thing that nobody is taking seriously. A sovereign agent framework that lets you set the rules, federate what agents are allowed to do and where, and see what is happening across your entire agent fleet. Whether they are running in the cloud, on-prem, or on your developer’s laptop at home.
Get a platform that tells you what is going on. Set the rules before something breaks. Know where your data lives. Or keep doing what you are doing and hope nothing bad happens. That has historically worked out great for everyone.
