Modern machine learning systems depend heavily on the quality and trustworthiness of their training data and model components. When attackers compromise training datasets or model parameters, they can inject hidden vulnerabilities, manipulate predictions, or bias outputs. In this Data Integrity & Model Poisoning room, you'll explore how these attacks work and how to detect and mitigate them using practical techniques.
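The manipulation described above can be illustrated with a toy label-flipping attack. The sketch below is not part of this room's lab; all functions and data points are hypothetical. It trains a minimal nearest-centroid classifier twice, once on clean data and once on data where an attacker has flipped two labels, and shows the same input being classified differently.

```python
# Hypothetical sketch (not the room's lab code): a label-flipping
# poisoning attack against a toy nearest-centroid classifier.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(dataset):
    """dataset: list of ((x, y), label) pairs -> {label: centroid}."""
    by_label = {}
    for point, label in dataset:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Return the label whose centroid is closest (squared distance)."""
    return min(model, key=lambda lbl: (model[lbl][0] - point[0]) ** 2
                                    + (model[lbl][1] - point[1]) ** 2)

# Clean training set: class 0 clusters near (0, 0), class 1 near (10, 10).
clean = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
         ((10, 10), 1), ((9, 10), 1), ((10, 9), 1)]

# Poisoned copy: the attacker flips two class-0 labels to 1, dragging
# class 1's centroid toward the origin and biasing future predictions.
poisoned = [((0, 0), 0), ((1, 0), 1), ((0, 1), 1),
            ((10, 10), 1), ((9, 10), 1), ((10, 9), 1)]

clean_model = train(clean)
poisoned_model = train(poisoned)

print(predict(clean_model, (4, 4)))     # -> 0 (nearest the clean class-0 centroid)
print(predict(poisoned_model, (4, 4)))  # -> 1 (same input, poisoned model)
```

Note that the attacker never touches the model or the test input: corrupting a small fraction of the training labels is enough to move the decision boundary and flip predictions on clean inputs.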
Learning Objectives
- Understand how compromised datasets or model components can lead to security risks.
- Examine common techniques adversaries use to introduce malicious inputs during training or fine-tuning.
- Assess vulnerabilities in externally sourced datasets, pre-trained models, and third-party libraries.
- Practice these attacks through the eyes of an attacker.
Prerequisites
Data integrity and model poisoning attacks are specialised threats within the broader field of machine learning security. To get the most out of this room, you should have a foundational understanding of how machine learning models are trained and deployed, as well as the basics of data preprocessing and model evaluation. Additionally, you should be familiar with general security principles related to supply chain security and input validation.
Set up your virtual environment
Please click the Start Machine button to boot up the machine. It will take approximately 3-4 minutes to load and warm up all the models. You will need the lab in Task 4, so by the time you reach that task, the Lab page will be fully ready for use.
I have successfully started the machine.
