OWASP, the organization known for its list of the top 10 security vulnerabilities in software, has just published version 1.0 of a new top 10 list devoted to risks related to large language model (LLM) applications.
“The frenzy of interest of Large Language Models (LLMs) following of mass-market pretrained chatbots in late 2022 has been remarkable. Businesses, eager to harness the potential of LLMs, are rapidly integrating them into their operations and client facing offerings. Yet, the breakneck speed at which LLMs are being adopted has outpaced the establishment of comprehensive security protocols, leaving many applications vulnerable to high-risk issues,” Steve Wilson, project lead for the list, wrote in the introduction to the list.
According to the organization, this top 10 list was designed for developers, data scientists, and security professionals who are building applications that use LLM technologies. They worked with almost 500 experts in the field in order to create this list.
The top 10 vulnerabilities, according to the list, are:
- Prompt Injection
- Insecure Output Handling
- Training Data Poisoning
- Model Denial of Service
- Supply Chain Vulnerabilities
- Sensitive Information Disclosure
- Insecure Plugin Design
- Excessive Agency
- Overreliance
- Model Theft
“This first version of the list will not be our last. We expect to update it on a periodic basis to keep pace with the state of the industry. We will be working with the broader community to push the state of the art, and creating more educational materials for a range of uses. We also seek to collaborate with standards bodies and governments on AI security topics. We welcome you to join our group and contribute,” Wilson wrote.
For more information on each vulnerability, view the list here.