In the Understanding Supply Chains room, you learned about the JFrog researchers who discovered approximately 100 malicious models on Hugging Face. Now you are about to investigate one yourself.
Yesterday, the CEO at TryTrainMe received an alarming email:
Subject: Your systems have been compromised! We've had access to your servers for 3 weeks through your "AI-powered" code reviewer. Check your model loading code. You might want to scan those "harmless" .pkl files you downloaded. – A concerned security researcher
Your security team has been called in. You will investigate four major attack vectors: malicious model serialisation (pickle), dependency confusion, model repository manipulation, and API provider compromise. Starting with a suspicious model file on the lab VM, you will trace how attackers exploit every layer of the supply chain.
Learning Objectives
- Explain how Python's pickle serialisation enables arbitrary code execution through the __reduce__ method
- Investigate a malicious model file using safe analysis techniques (pickletools)
- Describe how dependency confusion attacks compromise package installations
- Identify the warning signs of a compromised model repository
- Recognise the attack vectors specific to API-consumed models: silent updates, key compromise, and prompt template injection
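Before diving into the tasks, it helps to see the pickle mechanism in miniature. The sketch below (with a deliberately harmless payload standing in for an attacker's code) shows why __reduce__ is dangerous, and how pickletools lets you inspect a pickle's opcodes without executing anything:

```python
import pickle
import pickletools

class Payload:
    """When unpickled, pickle calls the callable returned by __reduce__.
    An attacker would return something like (os.system, ("...",));
    here we use print as a harmless stand-in."""
    def __reduce__(self):
        return (print, ("code ran during unpickling!",))

malicious_bytes = pickle.dumps(Payload())

# SAFE analysis: disassemble the opcode stream without loading it.
# Suspicious GLOBAL/STACK_GLOBAL and REDUCE opcodes reveal the
# embedded callable before any code runs.
pickletools.dis(malicious_bytes)

# UNSAFE: loading actually invokes the payload.
# Never do this with an untrusted .pkl file.
pickle.loads(malicious_bytes)
```

Running the disassembly shows the callable's module and name in plain text, which is exactly what you will look for when triaging the suspicious model file on the lab VM.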
Prerequisites
- Completed Understanding Supply Chains
- Basic Python knowledge (variables, functions, classes)
- Comfortable using the terminal