AI Security · Supply Chain · Research

Your AI Stack Just Handed Over Your Root Keys: Inside the litellm PyPI Breach

The popular litellm Python package was compromised on PyPI. Versions 1.82.7 and 1.82.8 contain malicious code that steals cloud credentials, SSH keys, and Kubernetes secrets.


This post was originally published on Trend Micro Research.

TL;DR

The popular litellm Python package was compromised on PyPI. Versions 1.82.7 and 1.82.8 contain malicious code that steals your cloud credentials, SSH keys, and Kubernetes secrets. If you updated your environment on or after 24/03/2026, assume your keys belong to someone else. Stop what you are doing, delete the package, and tell your team to rotate credentials immediately.

A morning surprise courtesy of a sloppy hacker

Imagine this. Your engineers sit down with their coffee, fire up their environments, and their machines instantly crash. That is exactly how the industry discovered the litellm PyPI supply chain attack. The malware contained a bug that spawned an endless loop of child processes, an accidental fork bomb that took down the host machine. Had the attackers written cleaner code, we would not have noticed, and they would still be quietly siphoning your production secrets right now. We got lucky.

According to the security breakdown from FutureSearch, an attacker hijacked the maintainer accounts for the litellm project. They bypassed standard GitHub release protocols and pushed compromised versions directly to PyPI. Because litellm sits between developers and nearly every major LLM endpoint, it gets pulled in as a dependency by everything from basic scripts to advanced coding agents.

The blast radius is staggering. This package saw 3,408,615 downloads yesterday alone, and over 95 million downloads in the last month. If your engineering team builds anything related to AI, they almost certainly pull this package into your environment.

AI security is still just software security

Everyone wants to talk about advanced AI vulnerabilities like prompt injection, data poisoning, and model inversion. Meanwhile, attackers are exploiting the exact same infrastructure weaknesses we have battled for a decade.

The AI technology stack is built on standard, fragile open-source foundations. Threat actors always target the central, weakest link. Why bother engineering a complex LLM jailbreak when a poisoned Python dependency hands over your Kubernetes cluster on a silver platter? We keep treating AI as a completely novel frontier, but the adversaries are simply using the same old supply chain crowbars to break in.

This incident also exposes the absolute stupidity of blindly updating to the latest package versions. The obsession with using the newest patch the second it drops is a massive vulnerability. If your CI/CD pipeline automatically pulls the newest release without a quarantine period, you are automating your own breach. Pin your dependencies to cryptographic hashes. Let someone else’s infrastructure test the newest release for supply chain malware first.
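Conceptually, hash pinning (what pip's --require-hashes mode does for you) reduces to comparing an artifact's digest against a value you recorded before the attacker showed up. A minimal sketch, assuming you have a known-good wheel on disk; the wheel_matches_pin helper is ours, and the placeholder digest is simply the SHA-256 of an empty file, not a real litellm release:

```python
import hashlib
from pathlib import Path

# Placeholder pinned digest: the SHA-256 of empty input, NOT a real wheel hash.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def wheel_matches_pin(wheel_path: Path, pinned: str = PINNED_SHA256) -> bool:
    """Return True only if the artifact on disk matches the pinned digest."""
    digest = hashlib.sha256(wheel_path.read_bytes()).hexdigest()
    return digest == pinned
```

In practice you would let pip enforce this at install time, but the point stands: an attacker who swaps the release on PyPI cannot also swap the digest recorded in your repository.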

The anatomy of a cloud-native heist

The attackers did not need a novel exploit. They abused a legitimate, well-known Python mechanism: any line beginning with "import" in a .pth file dropped into site-packages is executed the moment the interpreter starts. Your team does not even have to import the compromised library. Running any Python script on an infected machine triggers the malware.
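That startup behaviour is easy to hunt for. A hedged sketch that walks site-packages and surfaces .pth lines CPython would execute at boot (the suspicious_pth_lines helper is ours; legitimate tooling such as setuptools also ships executable .pth lines, so treat hits as leads, not verdicts):

```python
import site
from pathlib import Path

def suspicious_pth_lines(site_dirs=None):
    """Yield (filename, line) pairs for .pth lines that run code at startup.

    CPython executes any .pth line that begins with 'import' when the
    interpreter boots -- the mechanism the litellm payload abused.
    """
    dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    for d in dirs:
        for pth in sorted(Path(d).glob("*.pth")):
            for line in pth.read_text(errors="replace").splitlines():
                if line.startswith(("import ", "import\t")):
                    yield pth.name, line
```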

Once alive, the payload acts as a highly sophisticated, cloud-centric stealer. It casts a massive net to extract AWS, GCP, and Azure configs, and actively queries your internal cloud metadata to hijack instance roles.

The real nightmare happens in Kubernetes. If the malware detects a service account token, it escalates to a full cluster takeover. It uses the token to steal secrets across every namespace. Worse, it orchestrates a container escape — breaking out of the isolated pod environment to install persistent backdoors directly on your underlying host nodes. Think of it like giving a vendor badge access to your lobby, only to find out they cloned the master key and are currently building a fort in your server room.

Finally, it encrypts your data and ships it to an attacker-controlled server, establishing a secondary connection to checkmarx.zone, a lookalike domain chosen to piggyback on a trusted security brand and slip past sloppy, keyword-based egress filtering.
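Spotting that trick is mechanical: flag any domain that reuses a trusted second-level label under a different TLD. A rough sketch; the allowlist entries are illustrative, and it deliberately ignores multi-part public suffixes such as co.uk:

```python
TRUSTED = {"checkmarx.com", "litellm.ai"}  # illustrative allowlist, not exhaustive

def lookalike(domain: str, trusted=TRUSTED) -> bool:
    """Flag domains that reuse a trusted brand label on a different TLD."""
    parts = domain.lower().rstrip(".").split(".")
    if len(parts) < 2:
        return False
    if ".".join(parts[-2:]) in trusted:
        return False  # exact trusted registrable domain, subdomains included
    return parts[-2] in {t.split(".")[0] for t in trusted}
```

An exact-match egress allowlist defeats this class of domain outright; the check above is only useful where you are stuck with looser, pattern-based filtering.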

We continuously warn about secret hygiene

This incident exposes a severe architectural flaw in how we build software. We blindly trust open-source registries, but more importantly, we make the attacker’s job incredibly easy once they breach the perimeter. We continuously publish research on these exact attack paths because we see them exploited every single day.

The malware specifically dumps environment variables and hunts for .env files deeply buried in your directories. If your organisation still stores long-lived credentials in environment variables or leaves unencrypted secrets on production disks, you are hand-delivering your infrastructure to attackers.
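A quick way to gauge your exposure is to enumerate exactly what this stealer would find. A minimal sketch; the helper names and the name-pattern regex are our assumptions, and matching a variable name is a heuristic, not proof a secret is live:

```python
import os
import re
from pathlib import Path

# Heuristic: variable names that usually hold long-lived credentials.
SECRET_NAME = re.compile(r"(SECRET|TOKEN|PASSWORD|ACCESS_KEY|PRIVATE_KEY)", re.I)

def risky_env_vars(environ=None):
    """List environment variable names that look like credentials."""
    env = environ if environ is not None else os.environ
    return sorted(k for k in env if SECRET_NAME.search(k))

def dotenv_files(root: str):
    """List .env-style files anywhere under root, however deeply nested."""
    return sorted(str(p) for p in Path(root).rglob(".env*"))
```

If either function returns anything on a production host, the malware would have returned it too.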

What to ask your team right now

If you have litellm anywhere in your stack, give your engineering and security teams these immediate directives based on the known Indicators of Compromise (IoCs):

  • Purge the environment. Search for litellm_init.pth and clear all package manager caches.
  • Hunt for the persistence implants. Tell your SOC to look for unauthorised sysmon.service daemons and suspicious temporary files like /tmp/pglog or /tmp/.pg_state.
  • Audit your Kubernetes clusters. Look for anomalous privileged pods matching the node-setup-* pattern in the kube-system namespace.
  • Block outbound traffic. Ensure your network drops all egress attempts to checkmarx.zone and models.litellm.cloud.
  • Assume breach. Force a rotation of SSH keys, cloud provider credentials, and database passwords immediately.
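The Kubernetes directive above can be scripted against the output of kubectl get pods -A -o json. A triage sketch; the record shape mirrors the Kubernetes pod object, and the flag_suspicious_pods helper name is ours:

```python
import fnmatch

def flag_suspicious_pods(pods, namespace="kube-system", pattern="node-setup-*"):
    """Return pod names in `namespace` whose names match the IoC pattern.

    `pods` is a list of dicts shaped like the items in
    `kubectl get pods -A -o json`: {"metadata": {"name": ..., "namespace": ...}}.
    """
    hits = []
    for pod in pods:
        meta = pod.get("metadata", {})
        if meta.get("namespace") == namespace and fnmatch.fnmatch(
            meta.get("name", ""), pattern
        ):
            hits.append(meta["name"])
    return hits
```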

Do not wait for a vendor to issue a critical alert. The attackers already have what they want.

The bill comes due

We built an entire ecosystem on top of fragile trust. The litellm hack is just the latest example of attackers exploiting our reliance on open-source registries and poor secret hygiene. Security is not an afterthought you can outsource entirely to a vulnerability scanner. If you allow developers to vibe-install unverified packages into production while leaving secrets lying around in plaintext, you might as well mail your root keys directly to the hackers and save them the effort.

Sources