Skip to main content
Feature
BLOG • 6 min read

What AI Security Skills Should You Learn First?

AI security has arrived as a serious discipline faster than most people expected. The ISC2 2025 Cybersecurity Workforce Study identified AI security as the single largest skills gap in the field, cited by 41% of organisations as a critical shortfall. Job postings specifically asking for AI security skills now pay an average of 28% more than equivalent roles without that requirement. The demand is real and it is growing faster than supply.

The challenge for anyone trying to break in is that AI security is genuinely new. The curriculum has not been fully written. Most guides either assume you already have an AI background or treat it as a vague future concern rather than something with specific, learnable skills. This guide answers the practical question: if you are starting now, what should you actually learn first, and in what order?


Why AI Security Is Different

Before covering the specific skills, it helps to understand what makes AI security distinct from traditional cyber security rather than just a new flavour of the same thing.

Traditional software has deterministic logic. You can audit a decision tree, trace an input to an output, and reason about the system's behaviour with certainty. Large Language Models are probabilistic and context-sensitive. The same input can produce different outputs. The model cannot reliably distinguish between trusted instructions from a developer and untrusted input from a user, because both appear as natural language in the same context window. That is an entirely new class of vulnerability with no direct equivalent in traditional software security.

The attack surface of AI systems also spans multiple layers that traditional security does not cover: the training data, the model weights, the inference pipeline, the API interface, the retrieval system (in RAG architectures), and the agent framework if the model is given tools to act with. Securing an AI system requires understanding all of these layers, which is why AI security practitioners need a combination of security knowledge and AI fundamentals that neither discipline produces on its own.


The Foundation You Need Before AI Security Makes Sense

AI security is not a starting point. It is a specialisation that builds on top of existing skills. Before the AI-specific content is meaningful, two foundations need to be in place.

Cyber security fundamentals. Understanding how attacks work, what threat modelling involves, how APIs can be exploited, and how security controls are designed and evaluated is the frame that makes AI security concepts meaningful. Someone who does not understand injection vulnerabilities in traditional web applications will find prompt injection harder to grasp than someone who does. Someone without API security fundamentals will find LLM API security difficult to reason about. The Cyber Security 101 path on TryHackMe covers the right foundational layer if you are starting from scratch.

Basic understanding of how AI systems work. You do not need to understand the mathematics of transformer architectures. You do need to understand what a large language model is, how training data shapes model behaviour, what a context window is and why it matters, and how RAG (Retrieval-Augmented Generation) systems retrieve and inject external data into model context. Without this, AI-specific attack techniques are just names rather than concepts you can apply and reason about.


The AI Security Skills to Learn First

1. Prompt Injection

Prompt injection is the number one security threat in LLM applications according to the OWASP LLM Top 10. It is the most fundamental and most widely exploitable vulnerability class in deployed AI systems, and it is the skill that every AI security practitioner needs to understand before anything else.

The attack exploits the fact that an LLM cannot distinguish trusted system instructions from untrusted user input. Both appear as natural language. A direct prompt injection crafts user input that overrides the system prompt or bypasses safety controls, commonly called jailbreaking. An indirect prompt injection is more dangerous: malicious instructions are embedded in external content the model retrieves or processes, such as a webpage a browser agent visits or a document a summarisation tool processes. The model faithfully follows those instructions without the user or developer realising an attack has occurred.

Understanding prompt injection means being able to construct and test injections, recognise when a deployed system is vulnerable, and implement input isolation and validation controls to reduce exposure. Both the offensive and defensive angles are in scope.

2. LLM Vulnerability Classes

Beyond prompt injection, the OWASP LLM Top 10 defines the attack surface systematically. The classes most important to learn first are:

Sensitive information disclosure: LLMs trained on data that contained private information can reproduce that information in responses. Models can also be induced to reveal system prompts or internal configuration through prompt engineering.

Data and model poisoning: Attackers who can influence the training data or fine-tuning process can embed backdoors or biased behaviours into model weights. Understanding how this works is essential for AI supply chain security.

Excessive agency: Agentic AI systems that are given tools and the ability to act on behalf of users create attack surfaces where a successful prompt injection does not just produce a bad response but takes real-world actions with real consequences.

Insecure output handling: Applications that trust and process LLM output without validation, injecting it into SQL queries, shell commands, or other interpreters, create secondary injection vulnerabilities downstream of the model.

3. AI Threat Modelling

Traditional threat modelling frameworks like STRIDE were built for deterministic systems and do not map cleanly to AI systems. MITRE ATLAS (Adversarial Threat Landscape for AI Systems) is the AI-specific framework that maps adversary tactics and techniques against AI systems, directly analogous to MITRE ATT&CK for traditional environments.

Learning AI threat modelling means being able to take a deployed AI system, enumerate its attack surfaces across training, inference, API, and agent layers, map those surfaces to MITRE ATLAS techniques, and identify which controls are missing or insufficient. This is the skill that makes AI security practitioners valuable at the architecture and review stage rather than only after deployment.

4. AI Supply Chain Security

Most organisations deploying AI today are not training their own models. They are deploying pre-trained models from third-party providers, building on top of APIs, using fine-tuning with their own data, and integrating retrieval systems that pull from internal knowledge bases. Each of these creates supply chain dependencies, and each of those dependencies is an attack surface.

Data provenance (understanding where training data came from and whether it can be trusted), model integrity (verifying that a model has not been tampered with), and secure deployment pipelines are the core competencies of AI supply chain security. This area is where existing security engineering skills transfer most directly into AI security work.

5. AI Forensics and Incident Response

When an AI system is compromised or produces unexpected behaviour, investigating what happened requires techniques specific to AI environments. Determining whether a model was poisoned, whether a prompt injection triggered an action, or whether sensitive data was extracted through careful prompting is different from traditional forensic investigation.

This is a more advanced skill that follows the others, but it is worth knowing it exists as a defined discipline so your learning sequence has a clear direction beyond the foundational skills.


How This Maps to Existing Security Career Paths

AI security is not a separate career track. It is a layer that sits on top of existing specialisations.

A penetration tester adds LLM red teaming, prompt injection testing, and MITRE ATLAS-based assessments to their existing toolkit. A SOC analyst adds detection of AI-powered attacks, prompt injection in logs, and investigation of agentic AI misbehaviour to their investigation workflows. A security engineer adds AI threat modelling, secure LLM deployment practices, and supply chain verification to their architecture reviews.

This means your existing specialisation determines which AI security skills are most immediately valuable to you, and which path through the material is most efficient.


Start With TryHackMe's AI Security Path

TryHackMe launched a dedicated AI Security path in April 2026, making it one of the most current structured AI security learning resources available. The path covers AI/ML Security Threats, AI Models and Data, Prompt Engineering, LLM Security, AI Threat Modelling, AI System Reconnaissance, AI Forensics, AI Supply Chain Security, and RAG Security fundamentals.

Each module puts you inside a live environment rather than just explaining concepts. The Prompt Engineering room uses a live AI agent called PromptSec that grades your prompt construction in real time. The LLM Security room covers direct and indirect injection techniques hands-on. The AI Threat Modelling module teaches MITRE ATLAS-based assessment in a structured lab environment.

The path is structured to follow the learning sequence described in this guide, covering the foundational AI knowledge before the offensive and defensive techniques, and building toward the more advanced supply chain and forensics content once the core skills are in place.

authorNick O'Grady
Apr 17, 2026

Join over 640 organisations upskilling their
workforce with TryHackMe

We use cookies to ensure you get the best user experience. For more information see our cookie policy.