Threat Research
Security Case Studies
Real-world examples of how Sigil's specialized AI security scanning helps identify threats that target the AI ecosystem.
The OpenClaw Campaign
In February 2026, the OpenClaw campaign published 314 malicious AI skills using advanced evasion techniques. This case study examines the attack patterns and demonstrates how specialized AI security scanning can identify threats that target the AI ecosystem. The campaign used techniques including prompt injection in skill documentation, password-protected archive delivery, and HTTP-based executable downloads to deploy credential-stealing malware.
Attack Vector
AI skill marketplace
Scope
314 malicious skills
Techniques Used
Prompt injection, password-protected payloads, markdown-based RCE, social engineering
Detection Methods
Install hook analysis, prompt injection scanning, publisher behavior analysis, binary detection
Attack Chain
Attacker creates marketplace account ("hightower6eu")
314 skills published with generic names (crypto, finance tools)
SKILL.md files embed malicious download instructions
HTTP downloads of executables (MITM vulnerability)
Password-protected archives, base64 encoding
Atomic Stealer (AMOS) deployed for credential harvesting
Sigil's Specialized Coverage
Sigil is designed to complement existing security tools by adding AI-specific threat detection.
More Case Studies
Shai-Hulud npm Worm
A self-propagating worm targeting the npm ecosystem through malicious postinstall hooks. The worm modified infected packages to include its own install hook, creating a chain reaction across the dependency tree. Sigil's Phase 1 (Install Hooks) and Phase 3 (Network/Exfil) detection patterns are designed to identify this class of threat.
Scope
2.6B+ weekly downloads affected
Technique
Self-propagating install hooks
Detection
Install hook detection + network exfiltration patterns
MUT-8694 Cross-Ecosystem Attack
The first coordinated attack spanning both npm and PyPI ecosystems simultaneously. MUT-8694 used provenance metadata abuse to deliver malicious binaries, exploiting trust in established package registries. Sigil's Phase 6 (Provenance) scanning is designed to detect suspicious binary files and anomalous package metadata.
Scope
First coordinated multi-ecosystem campaign
Technique
Binary delivery via provenance abuse
Detection
Provenance analysis + binary file detection
Hugging Face Model Poisoning
Over 100 machine learning models on Hugging Face were found to contain malicious pickle payloads that established reverse shells when loaded. This attack exploited Python's pickle deserialization to execute arbitrary code. Sigil's Phase 2 (Code Patterns) detection for pickle.loads() and Phase 3 (Network/Exfil) patterns help identify this attack vector.
Scope
100+ poisoned models
Technique
Pickle exploit for reverse shells
Detection
Code execution patterns + network exfiltration