Recent research shows that OpenClaw’s skill-scanning system is not a secure boundary. Third-party skills remain a problem for AI agent creation and usage.
OpenClaw skills still pose security threats, and the new skill-scanning system is not a secure boundary, according to recent security research. Skill scanning was introduced as a gateway for skill publishers, aiming to intercept potentially malicious payloads or malicious elements of the skill itself.
As Cryptopolitan reported, third-party services have already posed security risks, and accelerating AI agent adoption is worsening the problem.
OpenClaw allows the user to create agents and run them on a local machine or a server. However, skills run immediately alongside OpenClaw and may inherit the same access to resources and tools. Since some skills involve sensitive tasks such as wallet access or on-chain interactions, skill sets posted by third parties remain a risk.
Recent research showed that Clawhub relies on VirusTotal as well as OpenClaw’s internal moderation system. The results of those checks classify skills and trigger user warnings during installation.
This system is still imperfect and may pass potentially harmful skills as harmless. A problem arises when VirusTotal flags a skill as suspicious while OpenClaw’s check deems it benign: the user is shown a warning but may still confirm the installation. Only skills flagged outright as malicious are blocked from download.
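The reported behavior can be sketched as a small decision function. This is a hypothetical illustration of the logic described above, not OpenClaw’s actual implementation; the function and verdict names are assumptions.

```python
def install_decision(virustotal: str, internal: str) -> str:
    """Combine two scanner verdicts ('benign', 'suspicious', or 'malicious')
    into an install decision, per the behavior described in the article."""
    verdicts = {virustotal, internal}
    if "malicious" in verdicts:
        return "block"   # fully flagged skills are not allowed for download
    if "suspicious" in verdicts:
        return "warn"    # user sees a warning but may still confirm install
    return "allow"
```

The weak spot described above is the `"warn"` path: a skill VirusTotal calls suspicious but OpenClaw calls benign still reaches the user, one confirmation click away from installation.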
OpenClaw also offers sandboxing and runtime controls, but these do not constitute a hard default boundary for third-party skills: Docker-based sandboxing is left optional, and some tools remain available with it switched off.
Users also choose the direct path because sandbox environments can be difficult to deploy, and some skills break inside them. This means the platform depends on reviews and warnings, a system that does not directly protect users when agent skills run.
OpenClaw has already implemented some security measures, including checks designed to catch code that reads secrets and sends them out. This approach is used in traditional security to detect suspicious processes, requests, and other behaviors.
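A check of this kind is typically pattern matching: flag code that both touches likely secrets and makes outbound calls. The sketch below is illustrative only (the patterns and function name are assumptions, not OpenClaw’s actual rules):

```python
import re

# Illustrative patterns: things that look like secret reads and network sends.
SECRET_PATTERNS = [
    r"os\.environ\[.?(API_KEY|SECRET|PRIVATE_KEY|TOKEN)",
    r"open\([^)]*(wallet|keystore|\.env)",
]
NETWORK_PATTERNS = [
    r"requests\.(get|post)\(",
    r"urllib\.request",
    r"socket\.connect",
]

def flags_exfiltration(source: str) -> bool:
    """Flag code that both reads a likely secret AND sends data out."""
    reads_secret = any(re.search(p, source) for p in SECRET_PATTERNS)
    sends_out = any(re.search(p, source) for p in NETWORK_PATTERNS)
    return reads_secret and sends_out
```

As the next paragraph notes, this style of check has blind spots: it sees code, not natural-language instructions or runtime behavior.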
AI agent skills are harder to scan because the inputs involve both code and natural-language instructions, as well as runtime behavior. Traditional security may have blind spots for agentic behaviors.
The next layer is AI scanning, used to catch risky behaviors that a static search or the usual regular-expression approach would miss. AI reviewers can probe the internal consistency of a skill, though they are not exhaustive against potential exploits: they surface the most obvious exploitable code or general inconsistencies.
Researchers noted that OpenClaw’s checks and moderation system were quick to approve skills, while VirusTotal sometimes took days to flag a new addition. It was also possible to add exploits to already-approved skills. This meant the OpenClaw process could declare skills benign when they contained unexpected behaviors.
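The post-approval modification problem has a standard mitigation: pin the content hash at approval time and re-verify before each run, so any later edit invalidates the approval. A minimal sketch (this is a generic technique, not something the research says OpenClaw does):

```python
import hashlib

def digest(skill_source: str) -> str:
    """Content hash recorded at the moment a skill is approved."""
    return hashlib.sha256(skill_source.encode("utf-8")).hexdigest()

def still_approved(skill_source: str, approved_digest: str) -> bool:
    """Re-verify before running: any post-approval edit changes the hash."""
    return digest(skill_source) == approved_digest
```

With pinning, an exploit slipped into an already-approved skill would force re-review instead of inheriting the old benign verdict.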
For AI agent developers, researchers recommend sandboxing or using tools that prevent skills from running unchecked, even if they are flagged as benign. The researchers called for skill platforms to assume that normal-looking skills may hide exploits, and advised against using them in high-value environments that grant access to crypto wallets or other sensitive information.
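Even without a full Docker sandbox, one cheap layer in the spirit of that advice is to run an untrusted skill in a subprocess with a scrubbed environment, so it cannot read host secrets such as API keys. A minimal sketch (the function name and environment whitelist are assumptions for illustration):

```python
import subprocess
import sys

# Whitelisted environment for untrusted skills: no API keys, no wallet paths.
SAFE_ENV = {"PATH": "/usr/bin:/bin"}

def run_skill(script_path: str) -> subprocess.CompletedProcess:
    """Run a skill script in a subprocess that sees only SAFE_ENV."""
    return subprocess.run(
        [sys.executable, script_path],
        env=SAFE_ENV,          # host secrets are not inherited
        timeout=30,            # bound runtime
        capture_output=True,
        text=True,
    )
```

This only addresses environment-variable leakage; filesystem and network access still require the heavier sandboxing the researchers recommend.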