Red Teaming AI
AI is no longer a futuristic concept—it’s embedded in critical systems shaping finance, healthcare, infrastructure, and national security. But with this power comes unprecedented risk. Red Teaming AI arms you with the mindset, methodology, and tools to proactively test and secure intelligent systems before real adversaries exploit them.
Written for security professionals, researchers, and AI practitioners, this field manual goes beyond theory. You’ll learn how to map the new AI attack surface, anticipate adversarial moves, and simulate real-world threats to uncover hidden vulnerabilities.
You’ll Learn How To:
- Think in graphs, not checklists: trace attack paths through interconnected AI components, data pipelines, and human interactions
- Poison the well: explore how adversaries corrupt training data to implant backdoors and erode model integrity
- Fool the oracle: craft evasion attacks that manipulate AI perception at decision time
- Hijack conversations: execute prompt injection attacks that turn Large Language Models into insider threats
- Steal the brain: probe for model extraction and privacy attacks that compromise valuable IP
- Conduct full-spectrum campaigns: use the STRATEGEMS framework and the AI Kill Graph to plan, execute, and report professional-grade red team engagements
Traditional security methods can’t keep up with adversarial AI. From manipulated financial agents to compromised autonomous vehicles, real-world failures have already caused billions in losses and threatened lives. Red Teaming AI equips you to meet this challenge with practical techniques grounded in real attack scenarios and cutting-edge research.
Philip A. Dursey is a three-time AI founder, cybersecurity architect, engineer, and former Chief Information Security Officer (CISO). He is the founder and CEO of HYPERGAME, a venture-backed innovator pioneering autonomous cyber defense technologies and advanced AI red team tooling. With nearly two decades of hands-on experience securing AI-native infrastructure across critical industries, national security environments, and frontier technology sectors, Philip is globally recognized as an expert in adversarial machine learning, large language model security, and autonomous agent resilience.
Introduction
Part I: The Adversarial Playbook: Mindset & Methodology
Chapter 1: The New Attack Surface: Thinking in Graphs
Chapter 2: The Engagement: An AI Red Teamer's Methodology
Part II: The AI Kill Graph: Core Attack Techniques
Chapter 3: Reconnaissance: Mapping the AI Terrain
Chapter 4: Poisoning the Well: Corrupting AI Data
Chapter 5: Fooling the Oracle: Evasive Attacks at Inference
Chapter 6: Hijacking the Conversation: LLM Prompt Injection
Chapter 7: Seizing Control: Agentic System Exploitation
Chapter 8: Stealing the Brain: Model Extraction and Privacy Attacks
Part III: The Campaign: Execution & Impact
Chapter 9: Graphs of Pain: Advanced Attack Sequences
Chapter 10: The Endgame: Reporting for Maximum Impact
Chapter 11: The Next Frontier: The Future of AI Red Teaming
References
Only a subset of the chapters above (highlighted in red on the original page) is included in this Early Access PDF.