Last week, OWASP released the 2025 edition of the OWASP Top 10 for LLM Applications, highlighting the rapid advancements in large language model (LLM) capabilities, their expanding use cases, and the evolving security risks. This updated framework aims to help organizations identify and mitigate the most critical security threats in LLM applications throughout the entire AI lifecycle—from development to deployment.

Source: OWASP

New Vulnerabilities Introduced

The 2025 list introduces three new vulnerabilities that reflect the current state of LLM applications and emerging attack vectors:

LLM07:2025 System Prompt Leakage

System prompts often contain essential instructions or sensitive information that guide the LLM's behavior. However, these prompts can inadvertently be exposed in the model's responses. To mitigate this risk, developers should:

  • Avoid embedding sensitive information within system prompts.
  • Implement robust evaluations and guardrails to prevent prompt leakage.
  • Regularly test the system for potential exposure points.
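A simple runtime guardrail along these lines can screen model output before it reaches the user. The sketch below is illustrative, not a complete defense: it flags responses that reproduce verbatim fragments of the system prompt or closely resemble it overall (the example prompt and thresholds are hypothetical):

```python
from difflib import SequenceMatcher

# Hypothetical system prompt containing text that should never be echoed
SYSTEM_PROMPT = "You are a support bot. Internal discount code: SAVE20."

def leaks_system_prompt(response: str, system_prompt: str,
                        min_fragment_len: int = 20,
                        similarity_threshold: float = 0.8) -> bool:
    """Flag a response that appears to reproduce the system prompt."""
    # Verbatim check: slide a fixed-size window over the system prompt
    for start in range(max(1, len(system_prompt) - min_fragment_len + 1)):
        fragment = system_prompt[start:start + min_fragment_len]
        if len(fragment) == min_fragment_len and fragment in response:
            return True
    # Fuzzy check: catch lightly paraphrased or reformatted leaks
    ratio = SequenceMatcher(None, system_prompt.lower(), response.lower()).ratio()
    return ratio >= similarity_threshold

def guarded_response(response: str, system_prompt: str) -> str:
    """Redact the response if it would expose the system prompt."""
    if leaks_system_prompt(response, system_prompt):
        return "I'm sorry, I can't share that."
    return response
```

In practice this kind of output filter is one layer among several; it complements, rather than replaces, keeping sensitive data out of the prompt in the first place.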

LLM08:2025 Vector and Embedding Weaknesses

LLM applications frequently use vector databases and embeddings to enhance their functionality. Without proper access controls, these systems can become vulnerable to unauthorized data exposure. Developers should:

  • Integrate stringent access control measures into Retrieval-Augmented Generation (RAG) systems.
  • Ensure that LLM responses do not disclose information from documents that users are not authorized to access.
  • Monitor and audit access patterns to detect and prevent unauthorized data retrieval.
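One way to apply these controls is to filter retrieved documents against the user's permissions before they ever enter the LLM context, so the model never holds text it could later disclose. The sketch below assumes a simple group-based ACL stored alongside each document; the data model and audit format are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: frozenset  # groups permitted to read this document

def authorize_retrieval(retrieved: list, user_groups: set) -> list:
    """Drop retrieved documents the user is not cleared to read.

    Filtering happens before documents reach the LLM context, which is
    what prevents the model from leaking unauthorized content.
    """
    permitted, denied = [], []
    for doc in retrieved:
        (permitted if doc.allowed_groups & user_groups else denied).append(doc)
    # Audit trail: record every blocked retrieval for later review
    for doc in denied:
        print(f"AUDIT: blocked {doc.doc_id} for groups {sorted(user_groups)}")
    return permitted
```

Enforcing the check at retrieval time, rather than asking the model to withhold information, keeps the access decision deterministic and auditable.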

LLM09:2025 Misinformation

LLMs have the potential to generate content that is not factually accurate, leading to the spread of misinformation. To address this challenge, developers should:

  • Utilize techniques like RAG to ground responses in verified data sources.
  • Conduct thorough evaluations using metrics such as factual consistency.
  • Implement feedback mechanisms to correct inaccuracies over time.
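As a minimal sketch of a factual-consistency check, one can score how much of an answer's content is actually supported by the retrieved sources. The word-overlap metric below is a crude proxy (real evaluations typically use NLI models or LLM judges), and the stopword list is an assumption:

```python
import re

# Small illustrative stopword list; real pipelines would use a fuller one
STOPWORDS = {"the", "a", "an", "is", "are", "was", "of", "and", "to", "in", "it", "that"}

def grounding_score(answer: str, sources: list) -> float:
    """Fraction of the answer's content words that appear in the sources.

    A low score flags an answer that strays from the grounding data and
    may need correction or regeneration.
    """
    def content_words(text: str) -> set:
        return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

    answer_words = content_words(answer)
    if not answer_words:
        return 1.0  # nothing to verify
    source_words = set()
    for src in sources:
        source_words |= content_words(src)
    return len(answer_words & source_words) / len(answer_words)
```

A score threshold can then gate responses in production, routing low-scoring answers to regeneration or human review.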

Vulnerabilities Removed or Reclassified

In addition to the new entries, several vulnerabilities have been removed or reclassified in the 2025 list:

  • LLM07: Insecure Plugin Design has been removed, but aspects of it are now covered under LLM06:2025 Excessive Agency.
  • LLM09: Overreliance is no longer listed but is partially addressed in LLM09:2025 Misinformation.
  • LLM10: Model Theft has been removed, with related concerns included in LLM10:2025 Unbounded Consumption.

The Imperative of Real-World Data in LLM Security

The evolution of the OWASP Top 10 for LLM Applications is driven by a deeper understanding of existing risks and informed by how LLMs are actually being used in real-world scenarios.

For instance, the addition of System Prompt Leakage as a top vulnerability reflects findings from our recent research. Last month, we published the "State of Attacks on GenAI" report, backed by comprehensive analysis of real-world data from over 2,000 LLM applications. This industry-first report sheds light on the evolving landscape of AI security threats, moving beyond hypothetical risks to uncover actual attack patterns and observed vulnerabilities.

Download the full report here

A significant takeaway from our analysis of real-world attacks is the limited effectiveness of prompt hardening as a standalone defense. Despite efforts to strengthen system prompts and align instructions, our research uncovered numerous examples of adversaries bypassing these safeguards with surprising ease. This underscores the critical need for robust, multi-layered security strategies that extend beyond prompt-level measures.

As OWASP has highlighted:

"The inclusion of System Prompt Leakage addresses a vulnerability with real-world exploits that the community has been increasingly concerned about. Many developers assumed that system prompts were securely isolated, but recent incidents have demonstrated that this information can inadvertently be exposed."

The unprecedented pace of LLM advancement is reflected in OWASP's annual updates to its LLM Top 10 list (compared to every 4 years for the OWASP Top Ten for traditional web applications). This accelerated review cycle underscores the dynamic nature of LLM security challenges and the critical importance of staying current with emerging threats.

Moving Forward: Empowering Secure AI with Pillar Security

In this evolving landscape, Pillar Security is dedicated to helping organizations develop, deploy, and use AI applications securely. By addressing vulnerabilities across the entire AI lifecycle—from development through production to usage—our platform ensures that businesses can innovate with confidence.

Pillar’s adaptive platform integrates seamlessly with any infrastructure, offering support for model-agnostic, self-hosted, and cloud deployments, as well as compatibility with leading foundation model providers. Key features include:

  • Comprehensive AI Asset Mapping: Combines AI fingerprinting with an LLM asset inventory to give security and compliance teams full visibility into AI operations. The platform integrates with code repositories, cloud environments, and ML/data platforms to map AI assets, monitor LLM application attributes, track changes, and map system interactions for secure and compliant operations.
  • Proactive Threat Mitigation: With automated red teaming and runtime guardrails, Pillar Security identifies and neutralizes AI-specific threats in real time, preventing potential breaches before they escalate.
  • Enhanced Governance and Compliance: Our platform provides comprehensive oversight and ensures adherence to industry standards and regulations, helping organizations maintain robust governance.
  • Data-Driven Optimization: Continuously refined with real-world AI data, our solutions deliver precise risk detection, improved data security, and comprehensive compliance support.
  • Flexible Deployment Options: Designed to adapt to various operational models, our solutions cater to organizations using centralized cloud infrastructures or decentralized local deployments.

By offering end-to-end AI lifecycle security, Pillar Security empowers businesses to innovate while protecting their critical assets. Our commitment is to provide organizations with the tools and expertise they need to build and maintain secure, resilient AI applications, ensuring peace of mind in an ever-evolving threat landscape.
