Summary
prompt-armor v0.5.0 was classified as CRITICAL RISK with a risk score of 90. Sigil detected 14 findings across 119 files, spanning the provenance, code patterns, network exfiltration, credential access, and obfuscation phases. Review the findings below before installing this package.
Package description: Open-core LLM prompt security analysis — detect prompt injections, jailbreaks, and other attacks
v0.5.0
22 March 2026, 11:22 UTC
by Sigil Bot
Risk Score: 90
Findings: 14
Files Scanned: 119
Provenance
Findings by Phase
Phase Ordering
Phases are ordered by criticality, with the most dangerous at the top. Critical phases are expanded by default.
Badge

Markdown:
[![Sigil Scan](https://sigilsec.ai/badge/pypi/prompt-armor)](https://sigilsec.ai/scans/DD5A1654-C54D-4DB0-B8D1-6D211FD52A0E)

HTML:
<a href="https://sigilsec.ai/scans/DD5A1654-C54D-4DB0-B8D1-6D211FD52A0E"><img src="https://sigilsec.ai/badge/pypi/prompt-armor" alt="Sigil Scan"></a>