
llm-std-lib

pypi


Summary

llm-std-lib v1.0.1 was classified as CRITICAL RISK with a risk score of 122. Sigil detected 18 findings across 117 files, spanning phases that include provenance, network exfiltration, and obfuscation. Review the findings below before installing this package.

Package description: Python library for standardization, optimization and monitoring of LLM API calls

CRITICAL RISK (122)

v1.0.1

28 March 2026, 03:10 UTC

by Sigil Bot

Risk Score

122

Findings

18

Files Scanned

117

Provenance

Findings by Phase

Phase Ordering

Phases are ordered by criticality, with the most dangerous at the top. Click any phase header to expand or collapse its findings. Critical phases are expanded by default.

Badge

Sigil scan badge for pypi/llm-std-lib

Markdown

[![Sigil Scan](https://sigilsec.ai/badge/pypi/llm-std-lib)](https://sigilsec.ai/scans/84D824AF-15ED-4521-9B90-BA3B5829D117)

HTML

<a href="https://sigilsec.ai/scans/84D824AF-15ED-4521-9B90-BA3B5829D117"><img src="https://sigilsec.ai/badge/pypi/llm-std-lib" alt="Sigil Scan"></a>

Run This Scan Yourself

Scan your own packages

Run Sigil locally to audit any package before it touches your codebase.

curl -sSL https://sigilsec.ai/install.sh | sh
Read the docs →

Free. Apache 2.0.
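Piping a remote script straight into sh executes it sight unseen. A cautious alternative is to fetch the installer, review it, and only then run it; the sketch below uses the sigilsec.ai URL from the command above, while the download-inspect-run pattern itself is generic:

```shell
# Fetch the installer to a local file instead of piping it into sh.
curl -sSL https://sigilsec.ai/install.sh -o install.sh

# Review what the script will do before executing it.
less install.sh

# Run it only once you are satisfied with its contents.
sh install.sh
```

This keeps a local copy of exactly what was executed, which is useful for later audit.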

Early Access

Get cloud scanning, threat intel, and CI/CD integration.

Join 150+ developers on the waitlist.

Get threat intelligence and product updates

Security research, new threat signatures, and product updates. No spam.

Other pypi scans

Believe this result is incorrect? Request a review or see our Terms of Service and Methodology.
