Cisco AI Defense: AI Runtime Protection

Protect your AI applications in production

Automatically configure guardrails to address the specific vulnerabilities of each model.

Superior protection of AI applications

AI Runtime protects production applications from attacks and undesired responses in real time using guardrails that are automatically configured to the vulnerabilities of each model, and are identified with AI Model and Application Validation.

AI Runtime GUI

Block malicious inputs

Attacks on AI systems are increasing in frequency and sophistication, especially as more data is connected. AI Runtime inspects every input and automatically blocks malicious payloads before they can cause damage. Common attacks include prompt injection, prompt extraction, denial of service (DoS), and command execution. The component also stops sensitive data, such as Personally Identifiable Information (PII), from reaching your model.

Help ensure safe model outputs

AI models will generate undesired responses due to malicious and inadvertent actions. AI Runtime scans model outputs to ensure there is no sensitive information, hallucinations, or other harmful content. Responses that fall outside an organization's standards will be blocked. This includes sensitive data from fine-tuning or connected databases used for retrieval-augmented generation (RAG).

Customize policies to fit your use case

AI models are used in a variety of industries and use cases, and require different guardrails. AI Runtime offers hundreds of out-of-the-box protections that can be customized to each model's vulnerabilities with AI Validation. Rules can be further tailored to your organization's standards, such as tolerance for explicit language and what constitutes sensitive information. 

Deploy your AI applications with confidence

Network-level visibility and enforcement

With visibility and control over traffic on the network, Cisco can detect and block malicious and undesired AI traffic using a multitude of enforcement points.

Model and application agnostic security

AI Runtime protects your generative AI applications, including chatbots, retrieval augmented generation (RAG) apps, and AI agents. It provides native support for your proprietary, commercial, and open-source AI applications.

Lightning-fast protection for your critical applications

AI Runtime is a low-latency service with high availability and bandwidth for your most demanding enterprise applications.

Achieve AI security excellence in your organization

AI Defense makes it easy to comply with AI security standards, including the OWASP Top 10 for LLM Applications. Learn more  about individual AI risks, including how they map to standards from MITRE, NIST, and OWASP, in our AI security taxonomy.

Test the models that power your application

Foundation models

Foundation models are at the core of most AI applications today, either modified with fine-tuning or purpose-built. Learn what challenges need to be addressed to keep models safe and secure.

RAG applications

Retrieval-augmented generation is quickly becoming a standard to add rich context to LLM applications. Learn about the specific security and safety implications of RAG.

AI chatbots and agents

Chatbots are a popular LLM application, and autonomous agents that take actions on behalf of users are starting to emerge. Learn about their security and safety risks.


Additional resources

AI safety and security taxonomy

Understand the generative AI threat landscape with definitions, mitigations, and standards classifications.

AI security research and threat intelligence

See the latest research and analysis of AI exploits that also inform our detections. 

AI security reference architectures

Secure design patterns and practices for teams developing LLM-powered applications. 

Cisco's responsible AI principles

Cisco is dedicated to securing artificial intelligence and emerging technologies. 

The enterprise choice for AI security

Close the AI security gap and unblock your AI transformation with comprehensive protection across your environment.