AI coding assistants like ChatGPT and GitHub Copilot have become a staple of the developer’s toolkit. They help dev teams move faster, automate boilerplate, and troubleshoot issues on the fly. But there’s a catch: these tools don’t always know what they’re talking about. Like other LLM applications, coding assistants sometimes hallucinate – confidently recommending software packages that don’t actually exist.

This isn’t just an annoying quirk – it’s a serious security risk that attackers can exploit. The technique is known as “slopsquatting”: a twist on supply chain attacks in which bad actors register hallucinated package names suggested by AI tools and fill them with malicious code. The underlying failure, known as “AI package hallucination,” creates an urgent need for stronger security guardrails – and for developers and engineers not to over-rely on LLMs without properly validating coding instructions and recommendations.

The GenAI coding tool recommends the package, the developer installs it… and software vendors find themselves with purpose-built malicious code integrated willingly, if unwittingly, into their products.

This article breaks down what AI package hallucinations are, how slopsquatting works, and how developers can protect themselves.

What is an AI Package Hallucination?

An AI package hallucination occurs when a large language model invents the name of a software package that looks legitimate but doesn’t exist. For example, when one security researcher asked ChatGPT for NPM packages to help integrate with ArangoDB, it confidently recommended orango-db.

The answer sounded entirely plausible. But it was entirely fictional, until the researcher registered it himself as part of a proof-of-concept attack.
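
A quick registry lookup is usually enough to expose this kind of hallucination. Here is a minimal sketch in Python against the public npm registry endpoint – the package names are purely illustrative (arangojs is the real ArangoDB JavaScript driver; whether orango-db resolves today depends on whether anyone has registered it since the proof of concept):

```python
import requests

def npm_package_exists(name: str) -> bool:
    """Return True if `name` is published on the public npm registry."""
    resp = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
    return resp.status_code == 200  # 404 means no such package is published

if __name__ == "__main__":
    for candidate in ["arangojs", "orango-db"]:
        status = "exists" if npm_package_exists(candidate) else "not found"
        print(f"{candidate}: {status}")
```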

These hallucinations happen because LLMs are trained to predict what “sounds right” based on patterns in their training data – not to fact-check. If a package name fits the syntax and context, the model may offer it up, even if it never existed.

Because GenAI coding assistant responses are fluent and authoritative, developers tend to assume that they’re accurate. If they don’t independently verify the package, a developer might unknowingly install a package the LLM made up. And these hallucinations don’t just disappear – attackers are turning them into entry points.

What is Slopsquatting?

Slopsquatting is a term coined by security researcher Seth Larson to describe a tactic that emerged with the first wave of AI-assisted coding: attackers exploit AI hallucinations by registering the non-existent package names that AI tools invent and filling those packages with malicious code. Awareness of slopsquatting has since grown, and countermeasures have become more common in package ecosystems, but the underlying tactic still works wherever a hallucinated name goes unregistered.

Unlike its better-known counterpart typosquatting, which counts on users mistyping or misreading slight variations on legitimate package or domain names, slopsquatting doesn’t rely on human error. It exploits machine error. When an LLM recommends a non-existent package like the above-mentioned orango-db, an attacker can register that name on a public repository like npm or PyPI. The next developer who asks a similar question might get the same hallucinated package. Only now, it exists. And it’s dangerous.

As Lasso’s research on AI package hallucination has shown, LLMs often repeat the same hallucinations across different queries, users, and sessions. This makes it possible for attackers to weaponize these suggestions at scale – and slip past even vigilant developers.

Why This Threat Is Real – and Why It Matters

AI hallucinations aren’t just rare glitches – they’re surprisingly common. In a recent study of 16 code-generating AI models, nearly 1 in 5 package suggestions (19.7%) pointed to software that didn’t exist.

This high frequency matters because every hallucinated package is a potential target for slopsquatting. And with tens of thousands of developers using AI coding tools daily, even a small number of hallucinated names can slip into circulation and become attack vectors at scale.

What makes slopsquatted packages especially dangerous is where they show up: in trusted parts of the development workflow – AI-assisted pair programming, CI pipelines, even automated security tools that suggest fixes. This means that what starts as an AI hallucination can silently propagate into production systems if it isn’t caught early.

How to Stay Safe 

You can’t prevent AI models from hallucinating – but you can protect your pipeline from what they invent. Whether you’re writing code or securing it, here’s my advice to stay ahead of slopsquatting:

For Developers:

Don’t assume AI suggestions are vetted. If a package looks unfamiliar, check the registry. Look at the publish date, maintainers, and download history. If it popped up recently and isn’t backed by a known organization, proceed with caution.
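
For packages on PyPI, that check can be scripted against the public PyPI JSON API. The sketch below is an illustration of the idea rather than a complete vetting tool – it reports whether a package exists, when its first file was uploaded, and the listed author or maintainer (download statistics live in separate services such as pypistats and aren’t covered here):

```python
from datetime import datetime, timezone

import requests

def pypi_report(name: str) -> None:
    """Print basic provenance details for a package before installing it."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        print(f"{name}: not found on PyPI – do not install")
        return
    data = resp.json()
    author = data["info"].get("author") or data["info"].get("maintainer") or "unknown"
    uploads = [
        f["upload_time_iso_8601"]
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: exists but has no released files – treat with suspicion")
        return
    first = min(uploads)
    age_days = (datetime.now(timezone.utc)
                - datetime.fromisoformat(first.replace("Z", "+00:00"))).days
    print(f"{name}: first upload {first} ({age_days} days ago), author/maintainer: {author}")

if __name__ == "__main__":
    pypi_report("requests")          # long-established, widely used
    pypi_report("some-made-up-pkg")  # almost certainly missing
```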

For Security Teams:

Treat hallucinated packages as a new class of supply chain risk. Monitor installs in CI/CD, add automated checks for newly published or low-reputation packages, and audit metadata before anything hits production.
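
One way to wire such a check into CI is a small gate script that refuses to pass when a pinned dependency is missing from the registry or was published only recently. This sketch assumes a simple requirements.txt with plainly pinned names and an arbitrary 90-day threshold; a real pipeline would use a proper requirements parser, an allowlist for known-good new packages, and policy tuned to the organization:

```python
from datetime import datetime, timezone
import sys

import requests

MIN_AGE_DAYS = 90  # threshold is a policy choice, not a standard

def first_upload(name: str):
    """Return the ISO timestamp of the first file uploaded to PyPI, or None."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        return None  # not on PyPI at all
    uploads = [
        f["upload_time_iso_8601"]
        for files in resp.json()["releases"].values()
        for f in files
    ]
    return min(uploads) if uploads else None

def main(requirements_path: str = "requirements.txt") -> int:
    failures = []
    for line in open(requirements_path):
        name = line.split("==")[0].split(">=")[0].strip()
        if not name or name.startswith("#"):
            continue
        ts = first_upload(name)
        if ts is None:
            failures.append(f"{name}: missing from PyPI or has no releases")
            continue
        age = (datetime.now(timezone.utc)
               - datetime.fromisoformat(ts.replace("Z", "+00:00"))).days
        if age < MIN_AGE_DAYS:
            failures.append(f"{name}: first published only {age} days ago")
    for failure in failures:
        print("WARNING:", failure)
    return 1 if failures else 0  # non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())
```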

For AI Tool Builders:

Consider integrating real-time validation to flag hallucinated packages. If a suggested dependency doesn’t exist or has no usage history, prompt the user before proceeding.
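
A validation hook of that kind can be as simple as a registry lookup performed before a suggested dependency is ever shown to the user. The following sketch is a hypothetical illustration – the function name and the “published version count as a crude usage proxy” heuristic are assumptions for this example, not an existing API:

```python
import requests

REGISTRY_URLS = {
    "npm": "https://registry.npmjs.org/{name}",
    "pypi": "https://pypi.org/pypi/{name}/json",
}

def validate_suggestion(name: str, ecosystem: str) -> str:
    """Check a suggested dependency before surfacing it to the user."""
    resp = requests.get(REGISTRY_URLS[ecosystem].format(name=name), timeout=10)
    if resp.status_code != 200:
        return f"'{name}' is not published on {ecosystem} – possible hallucination."
    data = resp.json()
    # Count published versions as a rough proxy for usage history.
    versions = data.get("versions") if ecosystem == "npm" else data.get("releases")
    if not versions:
        return f"'{name}' exists on {ecosystem} but has no releases – confirm with the user first."
    return f"'{name}' found on {ecosystem} with {len(versions)} published version(s)."

if __name__ == "__main__":
    print(validate_suggestion("lodash", "npm"))
    print(validate_suggestion("orango-db", "npm"))
```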

The Bottom Line

AI coding tools and GenAI chatbots are reshaping how we write and deploy software – but they’re also introducing risks that traditional defenses aren’t designed to catch. Slopsquatting exploits the trust developers place in these tools – the assumption that if a coding assistant suggests a package, it must be safe and real.

But the solution isn’t to stop using AI to code. It’s to use it wisely. Developers need to verify what they install. Security teams should monitor what gets deployed. And toolmakers should build in safeguards from the get-go. Because if we’re going to rely on GenAI, we need protections built for the scale and speed it brings.