Hackers are exploiting insecure deployments of the AI agent OpenClaw, leaving more than 28,000 systems worldwide open to attack. SecurityScorecard’s analysis reveals that these deployments expose thousands of high-risk systems directly to the internet, with minimal protective measures in place.
The report identified a total of 40,214 internet-exposed OpenClaw instances, with 28,663 unique IP addresses hosting control panels accessible globally. Approximately 63% of these deployments are vulnerable to remote code execution, which enables attackers to seize control of host machines without user interaction.
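A common root cause of this kind of exposure is an administrative interface bound to all network interfaces (`0.0.0.0`) rather than to loopback. The following is a minimal, hypothetical sketch (not OpenClaw’s actual code) showing the safer binding, which keeps a control panel unreachable from outside the host even without a firewall:

```python
import socket

def bind_loopback_only(port: int) -> socket.socket:
    """Bind a control-panel listener to loopback only, so it is not
    reachable from the internet even if no firewall is in place."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))  # NOT "0.0.0.0", which exposes the panel globally
    s.listen()
    return s

srv = bind_loopback_only(0)  # port 0: let the OS pick a free port
host, port = srv.getsockname()
srv.close()
```

Remote access, when genuinely needed, is better layered on top of this via a VPN or an authenticated reverse proxy rather than by exposing the panel directly.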
Among the vulnerabilities, three high-severity Common Vulnerabilities and Exposures (CVEs) were noted, with CVSS scores ranging from 7.8 to 8.8. Public exploit code for each vulnerability is readily available, heightening the risk for unprotected systems.
The findings show that 549 exposed instances correlate with previous breach activity, while 1,493 are linked to known vulnerabilities. Many exposed deployments occur within major cloud and hosting providers, highlighting repeated patterns of insecure setups.
OpenClaw, previously known as Moltbot and Clawdbot, functions as a personal AI agent, managing tasks and communications for users. The issue stems from excessive permissions granted to these systems without adequate security measures.
Jeremy Turner, VP of Threat Intelligence at SecurityScorecard, stated, “In practice, because it was written by AI, security wasn’t a dominating feature in the development process.” He emphasized the importance of careful consideration regarding integrations and permissions assigned to these AI agents.
The report also found that users commonly configure the bots with identifiable personal or company names, making them attractive targets for cybercriminals. Connecting an AI agent to a platform provides that agent with specific permissions, including the ability to access emails or post content.
Turner explained, “The risk isn’t that these systems are thinking for themselves. It’s that we’re giving them access to everything.” He likened this to handing a laptop to a stranger and expecting no negative consequences.
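One way to avoid “giving them access to everything” is to gate an agent’s actions behind an explicit allowlist, so high-risk operations fail loudly instead of executing. The sketch below is illustrative only; the names (`ALLOWED_TOOLS`, `run_tool`) are hypothetical and not part of OpenClaw’s API:

```python
# Grant only the low-risk tools the agent actually needs;
# omit anything irreversible (sending mail, moving money).
ALLOWED_TOOLS = {"read_calendar", "draft_email"}

def run_tool(name: str, handler, *args):
    """Execute an agent tool call only if it is explicitly allowlisted."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} not in allowlist")
    return handler(*args)

# A denied high-risk action raises instead of silently succeeding:
try:
    run_tool("transfer_funds", lambda amount: amount, 500)
except PermissionError as exc:
    print(exc)
```

The design choice here is deny-by-default: new integrations get no access until someone deliberately adds them, which is the opposite of the over-permissioned setups the report describes.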
If an agent is compromised, the consequences could include unauthorized fund transfers or malicious messages sent in the user’s name, since the agent’s actions appear legitimate. The ongoing imbalance between rapid AI adoption and insufficient security measures has led to data exposure and loss of control among users.
OpenClaw has raised concerns, prompting Microsoft to advise against its use on standard devices. Additionally, Chinese authorities have restricted OpenClaw in office environments due to significant security risks.
Some vulnerabilities allow hackers to access sensitive information and have facilitated malware distribution via GitHub. Turner urged caution, advising users not to deploy AI agents indiscriminately. “Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do,” he said.
