In late 2024 and throughout 2025, multiple incident response reports described the same uncomfortable outcome: phishing campaigns that were detected early still resulted in full organisational breaches.
These were not zero-day exploits or stealthy malware campaigns. In many cases, alerts fired exactly as designed. The failure happened later, during investigation, triage, and escalation.
This Hack News article examines one common breach path, reconstructed from real-world disclosures and incident response write-ups, to explain how a detected phishing campaign still progressed into a full breach, and why alerts alone were not enough to stop it.
The real-world basis for this breach pattern
This attack path is grounded in:
- CISA advisories on credential-based intrusions and phishing-led access
- ENISA reporting on identity abuse and persistence after initial compromise
- Incident response analyses from multiple security vendors describing phishing-to-cloud and phishing-to-identity takeover cases
These sources consistently describe the same progression: phishing → valid access → slow escalation → delayed detection.
This is not a hypothetical chain. It reflects how many modern breaches actually unfold.
Stage 1: Phishing delivers valid access, not malware
In these incidents, phishing emails were designed to harvest credentials or session tokens rather than deliver malicious payloads.
Common techniques included:
- credential harvesting pages imitating SaaS login portals
- MFA fatigue attacks using repeated push notifications
- OAuth or consent-based phishing granting long-lived access
Because the outcome was valid authentication, endpoint tooling often saw nothing malicious. Email security flagged suspicious messages, but the technical impact appeared limited.
At this stage, alerts typically existed, but they were not treated as critical.
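The MFA fatigue technique in particular shows how thin the available signal is: nothing fails, a user simply approves a push after a burst of rejected ones. A simple heuristic over sign-in telemetry can surface that pattern. The sketch below is a minimal illustration only; the event structure, field values, and thresholds are assumptions for the example, not the schema of any particular identity platform.

```python
from datetime import datetime, timedelta

# Hypothetical sign-in events: (user, timestamp, mfa_result).
# mfa_result is "denied" for a rejected push and "approved" for an accepted one.
events = [
    ("j.doe", datetime(2025, 3, 1, 2, 14), "denied"),
    ("j.doe", datetime(2025, 3, 1, 2, 15), "denied"),
    ("j.doe", datetime(2025, 3, 1, 2, 16), "denied"),
    ("j.doe", datetime(2025, 3, 1, 2, 18), "approved"),
]

WINDOW = timedelta(minutes=10)   # look-back window for repeated prompts
THRESHOLD = 3                    # denied pushes before an approval is suspicious

def flag_mfa_fatigue(events, window=WINDOW, threshold=THRESHOLD):
    """Flag users who approve a push shortly after a burst of denied pushes."""
    flagged = []
    for user, ts, result in events:
        if result != "approved":
            continue
        recent_denials = [
            e for e in events
            if e[0] == user and e[2] == "denied" and ts - window <= e[1] < ts
        ]
        if len(recent_denials) >= threshold:
            flagged.append((user, ts, len(recent_denials)))
    return flagged

print(flag_mfa_fatigue(events))
# [('j.doe', datetime.datetime(2025, 3, 1, 2, 18), 3)]
```

The logic is trivial; what matters is whether the resulting alert is treated as the start of an investigation rather than filed as another low-severity notification.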
Stage 2: Alerts fired, but context was missing
Once attackers authenticated successfully, secondary alerts often appeared:
- unusual login locations
- atypical login times
- token refresh activity
- access to unfamiliar cloud resources
Individually, these alerts were low to medium severity. In busy SOC environments, they blended into background noise.
Crucially, they were rarely correlated with the earlier phishing event. Without that connection, each signal looked explainable in isolation.
This is where alert fatigue shifted from an operational problem to a security failure.
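To make that missing correlation concrete, the sketch below treats a phishing report as the seed of a candidate incident and pulls in any later identity or cloud alert for the same user within a fixed window. The alert format, field names, and 72-hour window are illustrative assumptions rather than a reference to any specific SIEM.

```python
from datetime import datetime, timedelta

# Hypothetical alerts from different tools, normalised to (source, user, timestamp, title).
alerts = [
    ("email",    "j.doe", datetime(2025, 3, 1, 9, 5),   "Suspicious phishing message reported"),
    ("identity", "j.doe", datetime(2025, 3, 1, 11, 40), "Sign-in from unfamiliar location"),
    ("cloud",    "j.doe", datetime(2025, 3, 2, 14, 2),  "First access to unfamiliar storage account"),
    ("identity", "a.lee", datetime(2025, 3, 1, 10, 0),  "Atypical sign-in time"),
]

CORRELATION_WINDOW = timedelta(hours=72)

def correlate_with_phishing(alerts, window=CORRELATION_WINDOW):
    """Group follow-on alerts that occur after a phishing alert for the same user."""
    phishing = [a for a in alerts if a[0] == "email" and "phishing" in a[3].lower()]
    incidents = []
    for source, user, ts, title in phishing:
        related = [
            a for a in alerts
            if a[1] == user and a[0] != "email" and ts <= a[2] <= ts + window
        ]
        if related:
            incidents.append({"user": user, "seed": title, "related": related})
    return incidents

for incident in correlate_with_phishing(alerts):
    print(incident["user"], "->", len(incident["related"]), "follow-on alerts")
```

Individually, each of these alerts is explainable; grouped around the phishing seed, they read as a single developing intrusion.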
Stage 3: Quiet internal reconnaissance
With valid access, attackers moved slowly.
They explored:
- email content and shared files
- cloud resources and permissions
- identity roles and trust relationships
Because actions were performed using legitimate accounts, activity often resembled normal user behaviour. Logging existed, but the signal-to-noise ratio was poor.
This phase frequently lasted days or weeks.
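One way defenders improve that signal-to-noise ratio is to baseline which resources each account normally touches and flag first-time access during the suspect period. The sketch below is a deliberately simplified version of that idea, built on hypothetical access records rather than real cloud audit data.

```python
# Hypothetical access records: (user, resource) pairs in chronological order.
baseline_period = [
    ("j.doe", "finance-share"),
    ("j.doe", "team-wiki"),
]
recent_activity = [
    ("j.doe", "team-wiki"),
    ("j.doe", "hr-archive"),        # never touched during the baseline period
    ("j.doe", "iam-role-export"),   # never touched during the baseline period
]

def first_seen_resources(baseline, recent):
    """Return resources a user touched recently but never during the baseline."""
    seen = {}
    for user, resource in baseline:
        seen.setdefault(user, set()).add(resource)
    return [
        (user, resource) for user, resource in recent
        if resource not in seen.get(user, set())
    ]

print(first_seen_resources(baseline_period, recent_activity))
# [('j.doe', 'hr-archive'), ('j.doe', 'iam-role-export')]
```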
Stage 4: Privilege expansion and persistence
In many documented cases, attackers escalated by:
- creating new OAuth applications
- granting additional permissions
- registering new authentication methods
- establishing persistence through cloud configurations
At this point, the breach became systemic. Even if the original credentials were revoked, attackers retained alternative access paths.
This is where organisations typically realised the incident was no longer “just phishing”.
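These persistence mechanisms do leave traces in identity and cloud audit logs, so they can be hunted for directly, particularly for accounts already flagged earlier in the chain. The sketch below illustrates the approach; the operation names and log shape are invented for the example and will differ between platforms.

```python
# Hypothetical audit-log entries: (actor, operation) tuples.
audit_log = [
    ("j.doe", "Read mailbox"),
    ("j.doe", "Register OAuth application"),
    ("j.doe", "Grant application permission"),
    ("j.doe", "Add authentication method"),
    ("a.lee", "Update profile photo"),
]

# Operations that commonly indicate persistence when performed by an
# account already under suspicion.
PERSISTENCE_OPERATIONS = {
    "Register OAuth application",
    "Grant application permission",
    "Add authentication method",
}

def persistence_events(audit_log, watched_actors):
    """Return persistence-style operations performed by accounts under suspicion."""
    return [
        (actor, op) for actor, op in audit_log
        if actor in watched_actors and op in PERSISTENCE_OPERATIONS
    ]

print(persistence_events(audit_log, watched_actors={"j.doe"}))
```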
Stage 5: Impact and discovery
The breach was usually discovered only after:
- abnormal data access
- downstream system compromise
- third-party notification
- external indicators of compromise
By then, the response had shifted from containment to recovery.
Post-incident reviews repeatedly showed that early alerts were present, but they were never assembled into a coherent attack narrative.
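Assembling that narrative is, mechanically, a merge-and-sort problem: events from separate tools need to be normalised and placed on a single chronological timeline. The sketch below shows the core of that reconstruction, using hypothetical events shaped like the ones in this breach path.

```python
from datetime import datetime

# Hypothetical events pulled from separate tools after the fact.
email_alerts = [(datetime(2025, 3, 1, 9, 5), "email", "Phishing message reported by user")]
identity_alerts = [(datetime(2025, 3, 1, 11, 40), "identity", "Sign-in from unfamiliar location")]
cloud_alerts = [(datetime(2025, 3, 4, 16, 20), "cloud", "New OAuth application registered")]

def build_timeline(*sources):
    """Merge events from several sources into one chronological timeline."""
    return sorted(
        (event for source in sources for event in source),
        key=lambda event: event[0],
    )

for ts, source, description in build_timeline(email_alerts, identity_alerts, cloud_alerts):
    print(f"{ts:%Y-%m-%d %H:%M}  [{source}]  {description}")
```

The hard part in the incidents described here was not the merge itself, but deciding early enough that these events belonged on the same timeline at all.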
Why controls didn’t stop the breach
In these cases, security tooling largely worked as designed.
Email gateways flagged messages. Identity platforms logged anomalies. SIEMs generated alerts.
What failed was investigation under uncertainty.
Alerts were treated as individual events rather than pieces of a timeline. Analysts lacked the time, context, or confidence to escalate ambiguous signals early.
This is not a tooling gap. It is a skills and process gap.
What defenders can learn from this breach pattern
The key lesson is not that phishing is unstoppable, but that valid access changes everything.
Once attackers authenticate legitimately:
- detection becomes behavioural, not signature-based
- correlation matters more than severity scores
- investigation skill matters more than alert volume
Defenders who successfully stopped similar campaigns did so by:
- correlating identity, email, and cloud telemetry
- questioning “benign” anomalies when they clustered
- treating phishing as the start of an investigation, not the end of one
Mapping this breach to defensive practice
This breach path highlights why hands-on defensive training matters.
Skills that make a difference here include:
- alert triage with context
- identity and cloud log investigation
- timeline reconstruction
- escalation judgement under ambiguity
Practising these workflows in realistic scenarios, as on TryHackMe, helps analysts recognise when separate alerts form a single incident before impact occurs.
Hack News takeaway
This breach did not succeed because alerts failed. It succeeded because alerts were not understood together.
Phishing remains effective not due to technical sophistication, but because it exploits the gap between detection and investigation.
As long as defenders treat alerts as isolated events rather than parts of a narrative, phishing campaigns will continue to turn into full breaches.
Nick O'Grady