Skip to main content
Back to all modules

Prompt Security

Prompt Security icon

Learn how attackers manipulate AI models through malicious inputs, and how to stop them.

This module covers one of the most prevalent AI attack classes: prompt injection. Learners begin with direct injection techniques before moving on to jailbreaking and instruction smuggling via external content. The module then shifts to the defensive side, covering hardening techniques including filters, guards, and template isolation. Two challenge rooms provide hands-on red experience, reinforcing both attack recognition and mitigation strategies.

Prompt Security icon

We use cookies to ensure you get the best user experience. For more information see our cookie policy.