Summary
txtai v9.7.0 was classified as CRITICAL RISK with a risk score of 124. Sigil detected 12 findings across 270 scanned files, spanning phases that include code patterns, obfuscation, and network exfiltration. Review the findings below before installing this package.
Package description: All-in-one open-source AI framework for semantic search, LLM orchestration and language model workflows
v9.7.0, scanned 20 March 2026, 17:26 UTC, by Sigil Bot
Risk Score: 124
Findings: 12
Files Scanned: 270
Findings by Phase
Phases are ordered by criticality, with the most dangerous at the top.
Badge
Markdown
[![Sigil Scan](https://sigilsec.ai/badge/pypi/txtai)](https://sigilsec.ai/scans/7DEFF782-F3A3-41D6-A0E5-41F648132DD6)
HTML
<a href="https://sigilsec.ai/scans/7DEFF782-F3A3-41D6-A0E5-41F648132DD6"><img src="https://sigilsec.ai/badge/pypi/txtai" alt="Sigil Scan"></a>
Run This Scan Yourself
Run Sigil locally to audit any package before it touches your codebase.
Believe this result is incorrect? Request a review or see our Terms of Service and Methodology.