HawkScan Test Info for LLM Injection

LLM Injection

Reference

Plugin Id: 40049 | CWE: 943

Remediation

To mitigate LLM Injection vulnerabilities, implement the following security measures:

  1. Input Sanitization: Sanitize and validate all user inputs before sending them to LLM endpoints. Remove or escape potentially malicious prompt injection attempts.

  2. Prompt Template Security: Use secure prompt templates that clearly separate user input from system instructions. Implement input length limits and content filtering.

  3. Response Filtering: Monitor LLM responses for signs of successful prompt injection, such as system prompt disclosure or jailbreak indicators.

  4. Principle of Least Privilege: Limit the capabilities and access levels of LLM systems. Avoid giving LLMs access to sensitive data or system functions.

About

LLM Injection attacks target Large Language Model endpoints by manipulating user prompts to override system instructions, extract sensitive information, or cause the model to behave in unintended ways. This corresponds to OWASP LLM Top 10 2025 - LLM01: Prompt Injection.

Risks

LLM Injection attacks can result in:

  • System prompt disclosure and configuration leaks
  • Jailbreaking model safety restrictions
  • Unauthorized data access and extraction
  • Model manipulation for malicious purposes
  • Bypass of content filtering and safety measures