
Artificial intelligence is becoming central to cybersecurity, powering anomaly detection, automating incident response, and predicting threats before they happen. But as the massive cyber attack on Transport for London (TfL) showed, even the best defences can be breached, and the consequences can be staggering.
Starting in August 2024, the cybercriminal group Scattered Spider launched an attack that cost TfL an estimated £39m, disrupted online services for three months, and forced 25,000 employees to verify their identities in person across the capital.
The hackers accessed sensitive customer data — including bank account details — and left many of TfL's online systems offline. Two teenagers, just 18 and 19 years old, were later arrested and charged under the Computer Misuse Act, with evidence linking one of them to attacks on US healthcare providers.
This wasn't an attack on just any company — TfL is part of the UK's critical national infrastructure. The scale of disruption shows how fragile digital systems can be, even for major public bodies with significant cyber defences.
So where does AI fit in — as saviour, or as a new vulnerability?
The Promise: AI as a Force Multiplier in Cyber Defence
AI is already changing how we defend critical infrastructure:
- Threat Detection at Scale: Machine learning can sift through millions of logs per second, flagging anomalies and correlating patterns humans might miss (see the sketch after this list).
- Automated Containment: AI-driven SOAR (Security Orchestration, Automation, and Response) platforms can isolate compromised systems within seconds, buying crucial time.
- Predictive Modelling: Generative AI can simulate potential attack vectors before they occur, enabling proactive hardening of systems.
- Identity Protection: Behavioural AI can identify account takeovers based on micro-patterns of user activity, even if credentials are valid.
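To make the threat-detection point concrete, here is a minimal sketch of unsupervised anomaly detection over log-derived features using scikit-learn's IsolationForest. The feature names, synthetic data, and contamination rate are illustrative assumptions rather than a description of TfL's tooling; a production pipeline would stream real log features and route flagged events to analysts.

```python
# Minimal sketch: unsupervised anomaly detection over log-derived features.
# Assumes log events have already been parsed into numeric features such as
# requests per minute, distinct endpoints touched, failed logins, bytes out.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic plus a few outliers standing in for behaviour
# like credential stuffing or bulk data exfiltration.
normal = rng.normal(loc=[60, 12, 1, 5_000], scale=[10, 3, 1, 1_500], size=(5_000, 4))
outliers = rng.normal(loc=[400, 80, 30, 90_000], scale=[50, 10, 5, 10_000], size=(10, 4))
events = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.005, random_state=0).fit(events)

flags = detector.predict(events)              # -1 = anomalous, 1 = normal
scores = detector.decision_function(events)   # lower scores are more suspicious
print(f"Flagged {int((flags == -1).sum())} of {len(events)} events for analyst review")
```

The value here is scale: the same scoring step runs identically over ten events or ten million, with humans reviewing only what the model surfaces.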
Had TfL been running continuous AI-powered threat hunting, irregular behaviour might have been detected earlier and the breach contained faster, potentially saving months of disruption.
The Risk: When AI Becomes the Target
But AI is not invulnerable — it can actually expand the attack surface:
- Model Poisoning: Adversaries can corrupt training data, tricking AI systems into misclassifying malicious behaviour as benign.
- Adversarial Examples: Carefully crafted inputs can cause AI systems to "look away" at the crucial moment, which is exactly what attackers want (illustrated in the sketch after this list).
- Overreliance Risk: AI can create a false sense of security. If the AI fails, or is manipulated, the entire system may collapse.
- Explainability Gaps: When things go wrong, black-box models make it difficult to trace root causes — delaying remediation.
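To illustrate the adversarial-example risk, the sketch below trains a toy linear detector and then nudges a "malicious" sample against the model's weight vector until the verdict flips. The data, feature scale, and perturbation budget are all illustrative assumptions; real evasion attacks target far more complex models, but the principle is the same.

```python
# Minimal sketch of an evasion-style adversarial example against a toy
# linear malware detector. All data and thresholds are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training set: benign samples cluster low, malicious samples cluster high.
benign = rng.normal(0.2, 0.1, size=(500, 6))
malicious = rng.normal(0.8, 0.1, size=(500, 6))
X, y = np.vstack([benign, malicious]), np.array([0] * 500 + [1] * 500)

detector = LogisticRegression().fit(X, y)

sample = malicious[0].copy()
print("Malicious score before:", round(detector.predict_proba([sample])[0, 1], 3))

# For a linear model, shifting the input against the weight vector lowers the
# malicious score. An attacker grows the perturbation until the verdict flips.
w = detector.coef_[0]
for eps in np.linspace(0.05, 0.5, 10):
    evasive = np.clip(sample - eps * np.sign(w), 0.0, 1.0)
    if detector.predict([evasive])[0] == 0:
        print(f"Evaded with per-feature change of {eps:.2f}; "
              f"malicious score now {detector.predict_proba([evasive])[0, 1]:.3f}")
        break
```

The underlying payload has barely changed, yet the detector now waves it through. That is the asymmetry defenders have to plan for.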
The TfL case is a reminder that sophisticated attackers are increasingly targeting the systems we rely on to defend us. Imagine if the very AI tools we trust to keep infrastructure safe were compromised — attackers could disable alarms, reroute traffic data, or selectively erase logs to cover their tracks.
Building AI-Ready Cyber Defences
Business leaders should think of AI in cybersecurity as a double-edged sword. To use it responsibly:
- Treat AI as Critical Infrastructure: Apply the same rigour to AI model security as you would to network firewalls or access controls.
- Implement Human-in-the-Loop Oversight: Pair AI detection with experienced security analysts to verify alerts and investigate anomalies.
- Harden Your Supply Chain: Ensure your AI models, data sources, and APIs are secured from tampering.
- Run Adversarial Testing: Regularly stress-test AI systems with simulated attacks to discover weaknesses before real attackers do.
- Plan for AI-Specific Incident Response: Include model rollback, retraining, and data cleansing as part of your playbooks (a minimal rollback sketch follows below).
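On the last point, incident-response playbooks need a way to restore a known-good model quickly. The sketch below is one illustrative approach, assuming a simple file-based registry and models serialised with joblib; the paths, naming scheme, and helper functions are hypothetical rather than a reference to any particular product.

```python
# Minimal sketch: version a model alongside a hash of its training data so a
# known-good version can be restored if poisoning is suspected.
# Registry layout, paths, and function names are illustrative assumptions.
import hashlib
import json
from pathlib import Path

import joblib


def snapshot_model(model, training_data_path: str, registry_dir: str = "model_registry") -> str:
    """Persist the model plus a manifest recording the training-data hash."""
    registry = Path(registry_dir)
    registry.mkdir(exist_ok=True)

    data_hash = hashlib.sha256(Path(training_data_path).read_bytes()).hexdigest()
    version = data_hash[:12]

    joblib.dump(model, registry / f"model_{version}.joblib")
    manifest = {"version": version, "training_data_sha256": data_hash}
    (registry / f"manifest_{version}.json").write_text(json.dumps(manifest, indent=2))
    return version


def rollback(version: str, registry_dir: str = "model_registry"):
    """Reload a previously snapshotted model, e.g. after a suspected poisoning event."""
    return joblib.load(Path(registry_dir) / f"model_{version}.joblib")
```

With snapshots like this in place, retraining from a cleansed dataset and comparing the new model against the rolled-back version becomes a repeatable playbook step rather than an improvisation under pressure.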
The Bigger Picture: The Talent & Regulation Gap
The TfL hack also reveals another critical gap: skills. AI-powered security systems are only as strong as the experts who train, tune, and monitor them. Demand for cybersecurity professionals with AI expertise far outstrips supply, leaving many organisations underprepared.
Regulators are starting to take notice — the EU AI Act and the UK's AI Safety Institute are beginning to set guardrails. But law moves slower than cybercriminals. Until regulation catches up, private sector leaders will need to set their own standards for secure AI adoption.
Conclusion: Learning from TfL
The TfL attack is a cautionary tale for every organisation: no one is too big or too prepared to be targeted. AI offers powerful tools to defend against such threats — but only if implemented with robust governance, transparency, and human oversight.
If we fail to secure AI, we risk creating a future where the very technology we depend on to protect us becomes the ultimate vulnerability.
Is your organisation building AI into your cybersecurity strategy in a way that is transparent, testable, and resilient? Or are you relying on black boxes that could be turned against you?
Need guidance on AI governance?
If you're navigating AI ethics, governance challenges, or regulatory compliance, we can help clarify priorities and next steps.
Book a Readiness Consultation