First published as a whitepaper in late 2024, the 2025 OWASP Top 10 for LLM Applications represents another significant milestone for OWASP, made possible through the contributions of numerous experts in AI, cybersecurity, cloud technologies, and beyond, including Mend.io Head of AI, Bar-El Tayouri.
LLMs remain a relatively new technology, but they are rapidly evolving, and the OWASP Top 10 for LLM Applications is evolving alongside them. Unlike other OWASP Top 10 lists, this latest version is not ranked by real-world exploitation frequency. It does, however, incorporate expanded feedback from real-world use cases, and only three categories remain unchanged from the 2023 version.
Below is a concise overview of each vulnerability and its potential impact. The original report provides further details on mitigation strategies, prevention techniques, and attack scenarios.
OWASP Top 10 for LLM Applications
LLM01: Prompt Injection
Prompt injection occurs when maliciously crafted inputs manipulate an LLM into performing unintended actions, exposing sensitive data, or executing unauthorized operations such as remote code execution. This vulnerability is particularly concerning because it exploits the fundamental design of LLMs rather than a flaw that can be patched.
Prompt injection falls into two primary categories:
- Direct prompt injection – Attackers craft prompts to bypass built-in system constraints, such as the “Do Anything Now” (DAN) attack, where role-playing techniques trick an LLM into ignoring its security restrictions.
- Indirect prompt injection – A malicious entity embeds harmful prompts in external data sources. LLMs process these hidden instructions, potentially leading to unintended actions. A notable example involved a resume containing an invisible prompt that forced an LLM to recommend the applicant for a job.
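Because prompt injection exploits the model's own instruction-following, there is no complete fix, but a heuristic input filter can serve as one defense-in-depth layer. The sketch below (patterns and function name are illustrative, not a vetted ruleset) flags common direct-injection phrasings like the DAN-style "ignore previous instructions" framing:

```python
import re

# Illustrative heuristic filter for direct prompt injection. Pattern
# matching alone cannot stop injection attacks, since the vulnerability
# lies in the LLM's instruction-following itself; treat this as one
# layer among several (output validation, least privilege, human review).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"you are now (dan|in developer mode)",
    r"disregard (your|the) (rules|guidelines|system prompt)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    return any(
        re.search(pattern, user_input, re.IGNORECASE)
        for pattern in INJECTION_PATTERNS
    )
```

Note that this only addresses direct injection; indirect injection arriving through retrieved documents or uploaded files requires scanning those external sources as well.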
LLM02: Sensitive Information Disclosure
LLMs may inadvertently expose sensitive data, including PII, financial details, health records, security credentials, and proprietary business information. Poorly configured models embedded in applications can also reveal proprietary algorithms, risking intellectual property (IP) breaches.
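One common mitigation is to filter model output for sensitive patterns before it reaches the user. The regexes below are only illustrative (real deployments would use a dedicated PII-detection service rather than two hand-written patterns), but they make the idea concrete:

```python
import re

# Illustrative output filter that redacts common PII patterns (email
# addresses, US-style SSNs) from an LLM response before display. The
# pattern set is deliberately minimal; production systems need far
# broader coverage and should not rely on regexes alone.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with a labeled placeholder."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```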
LLM03: Supply Chain Vulnerabilities
Most LLMs are not built from scratch but rely on pre-existing technologies, third-party components, and external datasets (e.g., Hugging Face models). These dependencies introduce risks such as data poisoning, backdoors, and malware embedded in pre-trained models.
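A basic supply chain control is to pin downloaded model artifacts to known-good checksums, much like a lockfile pins package versions. A minimal sketch, assuming a hypothetical `TRUSTED_HASHES` allowlist populated from a vetted registry:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist mapping artifact filenames to vetted SHA-256
# digests. In practice this would come from a signed manifest or an
# AI BOM, not a hard-coded dict. (The digest shown is SHA-256 of an
# empty file, used here purely as a placeholder.)
TRUSTED_HASHES = {
    "model.safetensors":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(path: Path) -> bool:
    """Return True only if the file's SHA-256 matches the allowlist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return TRUSTED_HASHES.get(path.name) == digest
```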
LLM04: Data and Model Poisoning
Data poisoning occurs when malicious inputs are used to manipulate an LLM during pre-training, fine-tuning, or augmentation (e.g., RAG). This can degrade model performance, introduce biases, security vulnerabilities, or even ethical concerns.
Similarly, model poisoning happens when attackers introduce malicious alterations into open-source models, potentially inserting backdoors or hidden exploits.
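One partial defense against data poisoning is provenance filtering: only admit training or fine-tuning records from reviewed sources. The record schema and source names below are assumptions made for the sketch; real pipelines would pair this with content-level anomaly detection:

```python
# Illustrative provenance check for a fine-tuning pipeline: records are
# accepted only if their "source" field is on a reviewed allowlist.
# Field names and source labels are hypothetical.
TRUSTED_SOURCES = {"internal-wiki", "curated-dataset-v2"}

def filter_training_records(records: list[dict]) -> list[dict]:
    """Keep only records whose declared source has been vetted."""
    return [r for r in records if r.get("source") in TRUSTED_SOURCES]
```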
LLM05: Improper Output Handling
When LLM-generated outputs are processed without proper validation or sanitization, vulnerabilities such as cross-site scripting (XSS) and remote code execution (RCE) may occur. For instance, a malicious review containing an embedded script could be executed in a user’s browser if the LLM summary fails to sanitize the output.
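The fix for the review-summary scenario above is to treat LLM output like any other untrusted input. A minimal sketch, assuming the summary is rendered into an HTML page:

```python
import html

# Treat LLM output as untrusted before rendering it in a web page:
# HTML-escaping ensures an embedded <script> tag is displayed as text
# rather than executed in the user's browser. The wrapper markup is
# illustrative.
def render_summary(llm_output: str) -> str:
    return f"<div class='summary'>{html.escape(llm_output)}</div>"
```

The same principle applies to other sinks: parameterize SQL built from model output, and never pass it to `eval` or a shell.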
LLM06: Excessive Agency
Excessive agency occurs when an LLM has more permissions, functionality, or autonomy than necessary. Examples include:
- Overly broad file access (e.g., read permissions that also allow write/deletion capabilities).
- Granting access to all user files instead of only the intended user.
- Allowing an LLM to perform critical actions (e.g., deleting files) without explicit user consent.
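The first two bullets can be addressed by scoping any file tool the LLM can call to read-only access within the requesting user's own directory, and simply not exposing destructive operations. A sketch under those assumptions (paths and layout are hypothetical):

```python
from pathlib import Path

# Least-privilege file tool for an LLM agent: read-only, confined to the
# requesting user's directory. Write and delete operations are simply
# not exposed as tools at all. Requires Python 3.9+ for is_relative_to.
def read_user_file(user_id: str, filename: str, base: Path) -> str:
    user_dir = (base / user_id).resolve()
    target = (user_dir / filename).resolve()
    # Reject path traversal (e.g. "../other_user/secret.txt").
    if not target.is_relative_to(user_dir):
        raise PermissionError("access outside the user's directory denied")
    return target.read_text()
```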
LLM07: System Prompt Leakage
System prompts help guide LLM behavior, but they may contain sensitive information or security-critical controls (such as authentication mechanisms). If an attacker extracts these prompts, they could manipulate the system or bypass security measures.
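The safer pattern is to keep credentials and authorization logic out of the prompt entirely and enforce them server-side. As a complementary guardrail, a lint-style check can catch obvious secrets before a system prompt ships; the patterns below are illustrative, not exhaustive:

```python
import re

# Illustrative pre-deployment check: flag system prompts that appear to
# contain credentials. This is a safety net, not a substitute for keeping
# secrets and access control out of prompts in the first place.
SECRET_PATTERNS = [
    r"api[_-]?key\s*[:=]",
    r"password\s*[:=]",
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----",
]

def prompt_contains_secrets(system_prompt: str) -> bool:
    return any(
        re.search(pattern, system_prompt, re.IGNORECASE)
        for pattern in SECRET_PATTERNS
    )
```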
LLM08: Vector and Embedding Weaknesses
When using retrieval-augmented generation (RAG), vulnerabilities such as unauthorized access, data leakage, cross-context information exposure, embedding inversion attacks, and knowledge conflicts may arise, allowing attackers to manipulate outputs or extract hidden knowledge.
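The unauthorized-access and cross-context risks can be reduced by enforcing access control at retrieval time, before any chunk reaches the model's context. The chunk schema below (an `acl` set of group labels plus a pre-computed similarity `score`) is an assumption made for the sketch:

```python
# Illustrative ACL-aware retrieval for a RAG pipeline: every stored chunk
# carries a set of group labels, and chunks the requesting user may not
# see are dropped before ranking, so they can never leak into the LLM
# context. The schema (acl, score, id fields) is hypothetical.
def retrieve(chunks: list[dict], user_groups: set[str], k: int = 3) -> list[dict]:
    allowed = [c for c in chunks if c["acl"] & user_groups]
    return sorted(allowed, key=lambda c: c["score"], reverse=True)[:k]
```

Filtering before ranking (rather than after) also avoids the subtle leak where forbidden documents influence which results are returned.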
LLM09: Misinformation
LLMs are not infallible – they can generate misinformation due to:
- Biases in training data.
- Hallucination of outputs, where the model fabricates plausible but incorrect information.
- Public perception issues, where users trust LLMs as authoritative sources despite inherent limitations.
For instance, LLMs may produce incorrect legal interpretations or make mathematical errors while maintaining a high level of confidence in their outputs.
LLM10: Unbounded Consumption
LLMs require significant compute resources, and unrestricted access can lead to denial-of-service (DoS) attacks, increased operational costs, model theft, and service degradation. Additionally, malicious actors may exploit LLM infrastructure for activities such as cryptocurrency mining.
Best Practices for Securing LLMs
Security principles for LLM applications align with standard application security practices, including:
- Input validation and sanitization.
- Red-teaming and adversarial testing.
- AI Bill of Materials (AI BOM) for tracking dependencies.
- Principles of least privilege and zero trust.
- User and developer education on AI security risks.
By implementing these best practices, organizations can significantly reduce the risks associated with LLM-based applications while ensuring secure and responsible AI adoption.