Generative AI is reshaping how people work, and security teams are no exception. At TryHackMe, our mission has always been to lower the barrier to entry into cyber security. Now that GenAI is transforming how security teams operate, we see it as our responsibility to equip the next generation of practitioners with the skills to thrive in this new environment. Mainstream adoption may still be 6–12 months away, but the pace of change is rapid, and security professionals can’t afford to be caught unprepared.
How SOC Work Will Change

A SOC’s core pillars span threat intelligence, detection and response, triage and analysis, and engineering. In some organisations these are dedicated functions with their own teams; in others, they are shared across a single team. Below, we describe how each pillar works and how GenAI is improving it.
Threat Intelligence
Threat intelligence is about collecting, correlating, and interpreting data to understand potential adversaries and emerging tactics. AI is accelerating this work by automating correlation across vast data sources, building detections directly from threat feeds, and even assisting analysts in proactive hunting. What was once time-consuming and manual is becoming faster and more scalable.
Detection
Detection and response is the backbone of a SOC, turning signals into action. Here, AI models are beginning to suggest new detection rules from historical incidents, and in some cases can generate them from natural-language descriptions. This not only speeds up detection engineering but also lowers the barrier for analysts to create and refine detections on the fly. For incident response, GenAI can take the large volumes of data gathered during investigations and automatically enrich, analyse, and summarise it to help incident responders better understand attackers' actions.
Triage and Analysis
Triage and analysis determine how quickly and effectively a SOC can separate noise from real threats. AI is changing this by enriching alerts with contextual data, summarising investigations into concise narratives, and prioritising incidents based on asset criticality. Analysts can focus less on sifting through noise and more on driving decisions.
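As a rough illustration of criticality-based prioritisation, the sketch below scores alerts by weighting severity against the value of the affected asset. The asset names, scores, and scoring formula are all hypothetical, not taken from any specific product:

```python
# Hypothetical sketch: rank alerts by combining alert severity with
# asset criticality, so incidents on high-value assets surface first.
ASSET_CRITICALITY = {"dc01": 10, "web-prod": 8, "dev-laptop": 3}  # invented inventory

def priority(alert: dict) -> int:
    """Score = severity (1-5) weighted by how critical the affected asset is."""
    return alert["severity"] * ASSET_CRITICALITY.get(alert["asset"], 1)

alerts = [
    {"id": "A1", "severity": 2, "asset": "dc01"},        # score 20
    {"id": "A2", "severity": 5, "asset": "dev-laptop"},  # score 15
    {"id": "A3", "severity": 3, "asset": "web-prod"},    # score 24
]

queue = sorted(alerts, key=priority, reverse=True)
print([a["id"] for a in queue])  # → ['A3', 'A1', 'A2']
```

The point is not the formula itself but the principle: a medium-severity alert on a domain controller can outrank a high-severity alert on a disposable dev laptop.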
Engineering
SOC engineering underpins the tooling, workflows, and automation that keep operations running. AI is starting to reshape this domain through workflow automation and early experiments in auto-remediation of vulnerabilities.
Skills for Aspiring Practitioners

For newcomers entering security, the landscape will look very different. Much of the core security tooling, such as SIEMs, will already have GenAI features like natural-language search and alert summarisation built in. With this, the day-to-day focus will change too: less time spent on repetitive triage, more time dedicated to digging into complex investigations that actually move the needle.
Analysts will still need a strong understanding of foundational skills, such as networking, web technologies, and operating system internals, to use these new tools effectively and validate AI's output. GenAI still carries the risk of hallucinating and producing incorrect output, and without the fundamentals to fall back on, analysts won’t always spot the mistakes.
While GenAI does a great job of summarising and interpreting aggregate data, SOC environments will always face the challenge of multiple vendors with data spread across disparate systems. Because of this, it will remain useful for analysts to know a scripting language like Python to quickly pull data, normalise it, and build basic automations on top.
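A minimal sketch of what that looks like in practice, assuming two hypothetical vendor alert formats (every field name here is invented for illustration):

```python
# Minimal sketch: normalise alerts from two hypothetical vendor formats
# into one common schema so they can be analysed and automated together.
from datetime import datetime, timezone

def normalise_vendor_a(raw: dict) -> dict:
    # Vendor A uses epoch seconds and flat "src"/"dst" fields (illustrative).
    return {
        "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat(),
        "source_ip": raw["src"],
        "dest_ip": raw["dst"],
        "rule": raw["sig_name"],
    }

def normalise_vendor_b(raw: dict) -> dict:
    # Vendor B already ships ISO timestamps but nests its network fields.
    return {
        "timestamp": raw["detected_at"],
        "source_ip": raw["network"]["client"],
        "dest_ip": raw["network"]["server"],
        "rule": raw["detection"],
    }

a = {"epoch": 1700000000, "src": "10.0.0.5", "dst": "8.8.8.8", "sig_name": "DNS tunnelling"}
b = {"detected_at": "2023-11-14T22:13:20+00:00",
     "network": {"client": "10.0.0.5", "server": "1.1.1.1"},
     "detection": "Beaconing"}

events = [normalise_vendor_a(a), normalise_vendor_b(b)]

# With one schema, a basic automation (grouping detections by source IP) is trivial:
by_source: dict[str, list[str]] = {}
for e in events:
    by_source.setdefault(e["source_ip"], []).append(e["rule"])
print(by_source)  # → {'10.0.0.5': ['DNS tunnelling', 'Beaconing']}
```

Once both vendors' alerts share one shape, downstream steps (deduplication, enrichment, ticket creation) only need to be written once.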
Skills for Mid-Level and Senior Analysts

For L2 and L3 analysts, the arrival of AI shifts the focus of their work in important ways. Rather than grinding through pivots across half a dozen tools, they’ll increasingly step into the role of investigator: making sense of AI-curated insights and connecting the dots between them.
A big part of the job will be deploying and tuning AI-native tools within the SOC stack, ensuring they align with the team’s workflows and actually add value rather than noise. Mid-level analysts will also become the bridge for juniors, teaching them how to use AI responsibly, reinforcing the basics, and making sure they don’t fall into the trap of blindly trusting outputs.
At the same time, L2s and L3s will find themselves concentrating on advanced areas that demand human judgment and creativity: threat hunting, malware analysis, and building resilient detection and response workflows. To do this well, they’ll need to develop a kind of AI literacy: knowing where these systems can accelerate their work, and where they might introduce blind spots or risk.
Skills for SOC Managers and Leaders
For SOC managers and leaders, the conversation shifts from working cases to shaping strategy. Their role will be less about chasing individual alerts and far more about deciding where and how AI fits into the bigger picture of the SOC.
Leaders will need fluency across both technology and governance. They’ll be expected to define an AI strategy, build the business case for investment, and ensure that any adoption aligns with risk and compliance requirements. Just as importantly, they’ll need to develop a sense of model trust: understanding where AI can be relied on, and where human oversight remains essential.
Metrics will evolve too. Instead of only tracking the volume of alerts or cases closed, leaders will look closely at how AI changes outcomes: dwell time, MTTR, analyst workload, and overall resilience. Their success won’t be measured by whether AI is present in the SOC, but by whether it’s deployed responsibly and delivering measurable impact.
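Metrics like MTTR are straightforward to compute once incident open and close times are logged consistently; a minimal sketch with hypothetical data:

```python
# Illustrative sketch: compute mean time to respond (MTTR) from
# incident open/close timestamps (hypothetical incidents).
from datetime import datetime

incidents = [
    {"opened": "2024-01-05T09:00", "closed": "2024-01-05T11:30"},  # 150 minutes
    {"opened": "2024-01-06T14:00", "closed": "2024-01-06T14:45"},  # 45 minutes
]

def mttr_minutes(incidents: list[dict]) -> float:
    durations = [
        (datetime.fromisoformat(i["closed"])
         - datetime.fromisoformat(i["opened"])).total_seconds() / 60
        for i in incidents
    ]
    return sum(durations) / len(durations)

print(mttr_minutes(incidents))  # → 97.5
```

The interesting leadership question is not the calculation but the comparison: tracking how this number moves before and after an AI tool is introduced, alongside analyst workload and dwell time.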
Where GenAI Can Go Wrong

Even though GenAI is rapidly evolving and improving, there are risks to using it within the SOC. If teams lean on it without the right guardrails, the consequences can affect every member of the SOC team and its wider operation.
For L1 analysts, the danger lies in taking AI output at face value. If juniors trust AI outputs without the fundamentals to challenge them, they may miss real threats and spend too much time debugging false positives. Instead of building critical thinking and problem-solving skills, AI could create a generation of analysts who lack in-depth investigation and analysis skills, forcing seniors to spend extra time correcting errors.
L2s and L3s may find themselves bogged down in more triage, not less, when hallucinations or inaccurate AI outputs flood the queue. They also risk spending more time coaching juniors who lean too heavily on the tools, rather than focusing on advanced investigations and threat hunting. If AI narrows the field of view too aggressively, anomalies that would once have been caught in manual pivots may slip by unnoticed.
For SOC managers and leaders, these ripple effects could make metrics like MTTR and dwell time look better on paper, while hallucinations in fact drive worse outcomes and hidden blind spots. Leaders also face the challenge of accountability: explaining AI decisions in compliance and regulatory contexts where transparency is critical. A poorly defined AI strategy doesn’t just fail to deliver value; it can actively erode trust across the business.
What Will Not Change
Even as AI matures, human judgment will still be irreplaceable. AI can enrich data, automate repetitive tasks, and surface insights, but people must make the final calls. Strong fundamentals remain just as important. Without deep technical grounding, practitioners may not be able to connect the dots across attacker activity and business logic, and will be unable to question or validate what the tools tell them.
The New Core Skills for Cyber Practitioners
With AI changing the way SOCs work, practitioners’ skillsets will also need to adapt. It starts with core security knowledge: an understanding of networks, operating systems, web technologies, and cloud. Layered onto this is AI literacy: knowing how these systems work, where they fail, and how to shape their outputs through prompt engineering. Analysts will also need fluency in low-code automation and workflow design, the ability to stitch tools together to streamline investigations. As AI increasingly augments the work of the L1, analysts will need to build advanced skillsets in threat hunting, malware analysis, and forensics.