OWASP, the organization known for its list of the top 10 security vulnerabilities in software, has just published version 1.0 of a new top 10 list devoted to risks related to large language model (LLM) applications. 

“The frenzy of interest of Large Language Models (LLMs) following of mass-market pretrained chatbots in late 2022 has been remarkable. Businesses, eager to harness the potential of LLMs, are rapidly integrating them into their operations and client facing offerings. Yet, the breakneck speed at which LLMs are being adopted has outpaced the establishment of comprehensive security protocols, leaving many applications vulnerable to high-risk issues,” Steve Wilson, project lead for the list, wrote in the introduction to the list

According to the organization, this top 10 list was designed for developers, data scientists, and security professionals who are building applications that use LLM technologies. They worked with almost 500 experts in the field in order to create this list. 

The top 10 vulnerabilities, according to the list, are:

  1. Prompt Injection
  2. Insecure Output Handling
  3. Training Data Poisoning
  4. Model Denial of Service
  5. Supply Chain Vulnerabilities
  6. Sensitive Information Disclosure
  7. Insecure Plugin Design
  8. Excessive Agency 
  9. Overreliance
  10. Model Theft

“This first version of the list will not be our last.  We expect to update it on a periodic basis to keep pace with the state of the industry. We will be working with the broader community to push the state of the art, and creating more educational materials for a range of uses. We also seek to collaborate with standards bodies and governments on AI security topics.  We welcome you to join our group and contribute,” Wilson wrote.

For more information on each vulnerability, view the list here.