Advent of Cyber 2025

Daily festive challenges and 30% off annual subscriptions

21days
:
19hr
:
59min
:
06sec
Subscribe now
Back to all modules

Attacking LLMs

Attacking LLMs icon

Learn to identify and exploit LLM vulnerabilities, covering prompt injection, insecure output handling, and model poisoning.

In this module, we cover practical attacks against systems that use large language models, including prompt injection, unsafe output handling, and model poisoning . You will learn how crafted inputs and careless handling of model output can expose secrets or trigger unauthorised actions, and how poisoned training data can cause persistent failures. Each topic includes hands-on exercises and realistic scenarios that show how small issues can be linked into larger attack paths. By the end, participants can build concise proof of concept attacks and suggest clear, practical mitigations.

Attacking LLMs icon

What are modules?

A learning pathway is made up of modules, and a module is made of bite-sized rooms (think of a room like a mini security lab).

Module tree diagram

We use cookies to ensure you get the best user experience. For more information contact us.

Read more