OWASP Top 10 Risks for LLMs visualized
LLM01: Prompt Injection
Crafty inputs can trick LLMs into unintended behaviors. This includes overwriting system prompts or manipulating external source inputs.
LLM02: Insecure Output Handling
When LLM outputs are blindly trusted, it risks system exposure with potential threats like XSS, CSRF, and privilege escalation.
LLM03: Training Data Poisoning
LLMs are at risk if their training data is altered, which might introduce security risks, biases, or affect their effectiveness.
LLM04: Model Denial of Service
LLMs are susceptible to attacks that strain resources, amplified by their resource-heavy nature and unpredictable user inputs.
LLM05: Supply Chain Vulnerabilities
LLM systems can be undermined by vulnerabilities from third-party datasets, plugins, or pre-trained models.
LLM06: Sensitive Information Disclosure
LLMs might unintentionally disclose sensitive information such as PII.
LLM07: Insecure Plugin Design
Insecure plugin designs, especially with poor input and access controls, can be easily exploited.
LLM08: Excessive Agency
Excessive functionalities or permissions given to LLMs might lead to unexpected outcomes.
LLM09: Overreliance
Over-relying on LLMs without proper checks may cause misinformation, legal challenges, and other vulnerabilities.
LLM10: Model Theft
Unauthorized access to LLM models can result in financial losses, competitive disadvantages, and exposure of confidential info.