Cloud Security Podcast
Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!
Episode list
#256
December 15, 2025
EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance
Topics: Artificial Intelligence
Topics covered:
- Do you believe that AI is going to end up being a net improvement for defenders or attackers? Does the answer differ in the short term versus the long term?
- We’re excited about the new book you have coming out with your co-author Nathan Sanders “Rewiring Democracy”. We want to ask the same question, but for society: do you think AI is going to end up helping the forces of liberal democracy, or the forces of corruption, illiberalism, and authoritarianism?
- If exploitation is always cheaper than patching (and attackers don’t follow as many rules and procedures), do we have a chance here?
- If this requires pervasive and fast “humanless” automatic patching (kinda like what Chrome has done for years), will this ever work for most organizations?
- Do defenders have to do the same and just discover and fix issues faster? Or can we use AI somehow differently?
- Does this make defense in depth more important?
- How do you see AI as changing how society develops and maintains trust?
Resources:
- “Rewiring Democracy” book
- "Informacracy Trilogy" book
- Agentic AI’s OODA Loop Problem
- EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
- AI and Trust
- AI and Data Integrity
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
- RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check
#255
December 8, 2025
EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
Guest:
- Heather Adkins, VP of Security Engineering, Google
Topics: Artificial Intelligence
Topics covered:
- The term "AI Hacking Singularity" sounds like pure sci-fi, yet you and some other very credible folks are using it to describe an imminent threat. How much of this is hyperbole to shock the complacent, and how much is based on actual, observed capabilities today?
- Can autonomous AI agents really achieve that "exploit-at-machine-velocity" without human intervention for the zero-day discovery phase?
- On the other hand, why might it actually not happen?
- When we talk about autonomous AI attack platforms, are we talking about highly resourced nation-states and top-tier criminal groups, or will this capability truly be accessible to the average threat actor within the next 6-12 months? What's the "Metasploit" equivalent for AI-powered exploitation that will be ubiquitous?
- Can you paint a realistic picture of the worst-case scenario that autonomous AI hacking enables? Is it a complete breakdown of patch cycles, a global infrastructure collapse, or something worse?
- If attackers are operating at "machine speed," the human defender is fundamentally outmatched. Is there a genuine "AI-to-AI" counter-tactic that doesn't just devolve into an infinite arms race? Or can we counter without AI at all?
- Given that AI can expedite vulnerability discovery, how does this amplified threat vector impact the software supply chain? If a dependency is compromised within minutes of a new vulnerability being created, does this force the industry to completely abandon the open-source model, or does it demand a radical, real-time security scanning and patching system that only a handful of tech giants can afford?
- Are current proposed regulations, like those focusing on model safety or disclosure, even targeting the right problem?
- If the real danger is the combinatorial speed of autonomous attack agents, what simple, impactful policy change should world governments prioritize right now?
Resources:
- “Autonomous AI hacking and the future of cybersecurity” article
- EP20 Security Operations, Reliability, and Securing Google with Heather Adkins
- Introducing CodeMender: an AI agent for code security
- EP251 Beyond Fancy Scripts: Can AI Red Teaming Find Truly Novel Attacks?
- Daniel Miessler site and podcast
- “How SAIF can accelerate secure AI experiments” blog
- “Staying on top of AI Developments” blog
#254
December 1, 2025
EP254 Escaping 1990s Vulnerability Management: From Unauthenticated Scans to AI-Driven Mitigation
Guest:
- Caleb Hoch, Consulting Manager on Security Transformation Team, Mandiant, Google Cloud
Topics: Vulnerability Management
Topics covered:
- How has vulnerability management (VM) evolved beyond basic scanning and reporting, and what are the biggest gaps between modern practices and what organizations are actually doing?
- Why are so many organizations stuck with 1990s VM practices?
- Why is mitigation planning still hard for so many?
- Why do many organizations, including large ones, still rely on unauthenticated scans despite the known importance of authenticated scanning for accurate results?
- What constitutes a "gold standard" vulnerability prioritization process in 2025 that moves beyond CVSS scores to incorporate threat intelligence, asset criticality, and other contextual factors?
- What are the primary human and organizational challenges in vulnerability management, and how can issues like unclear governance, lack of accountability, and fear of system crashes be overcome?
- How is AI impacting vulnerability management, and does the shift to cloud environments fundamentally change VM practices?
Resources:
- EP109 How Google Does Vulnerability Management: The Not So Secret Secrets!
- EP246 From Scanners to AI: 25 Years of Vulnerability Management with Qualys CEO Sumedh Thakar
- EP248 Cloud IR Tabletop Wins: How to Stop Playing Security Theater and Start Practicing
- How Low Can You Go? An Analysis of 2023 Time-to-Exploit Trends
- Mandiant M-Trends 2025
- EP204 Beyond PCAST: Phil Venables on the Future of Resilience and Leading Indicators
- Mandiant Vulnerability Management
#253
November 24, 2025
EP253 The Craft of Cloud Bug Hunting: Writing Winning Reports and Secrets from a VRP Champion
Guests:
- Sivanesh Ashok, bug bounty hunter
- Sreeram KL, bug bounty hunter
Topics covered:
- We hear from the Cloud VRP team that you write excellent bug bounty reports - is there any advice you'd give to other researchers when they write reports?
- You are one of Cloud VRP's top researchers and won the MVH (Most Valuable Hacker) award at their event. What do you think makes you so successful at finding issues?
- What is a bugSWAT?
- What do you find most enjoyable and least enjoyable about the VRP?
- What is the single best piece of advice you'd give an aspiring cloud bug hunter today?
#252
November 17, 2025
EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success
Guests:
- Alexander Pabst, Deputy Group CISO, Allianz
- Lars Koenig, Global Head of D&R, Allianz
Topics covered:
- Moving from traditional SIEM to an agentic SOC model, especially in a heavily regulated insurer, is a massive undertaking. What did the collaboration model with your vendor look like?
- Agentic AI introduces a new layer of risk - that of unconstrained or unintended autonomous action. In the context of Allianz, how did you establish the governance framework for the SOC alert triage agents?
- Where did you draw the line between fully automated action and the mandatory "human-in-the-loop" for investigation or response?
- Agentic triage is only as good as the data it analyzes. From your perspective, what were the biggest challenges - and wins - in ensuring the data fidelity, freshness, and completeness in your SIEM to fuel reliable agent decisions?
- We've been talking about SOC automation for years, but this agentic wave feels different. As a deputy CISO, what was your primary, non-negotiable goal for the agent? Was it purely Mean Time to Respond (MTTR) reduction, or was the bigger strategic prize to fundamentally re-skill and uplevel your Tier 2/3 analysts by removing the low-value alert noise?
- As you built this out, were there any surprises along the way that left you shaking your head or laughing at the unexpected AI behaviors?
- We felt a major lack of proof - Anton kept asking for pudding - that any of the agentic SOC vendors we saw at RSA had actually achieved anything beyond hype! When it comes to your org, how are you measuring agent success? What are the key metrics you are using right now?
Resources:
- EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
- EP242 The AI SOC: Is This The Automation We've Been Waiting For?
- EP249 Data First: What Really Makes Your SOC 'AI Ready'?
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- “Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!” blog
- “How Google Does It: Building AI agents for cybersecurity and defense” blog
- Company annual report to look for risk
- “How to Win Friends and Influence People” by Dale Carnegie
- “Will It Make the Boat Go Faster?” book
#251
November 10, 2025
EP251 Beyond Fancy Scripts: Can AI Red Teaming Find Truly Novel Attacks?
Guest:
- Ari Herbert-Voss, CEO at RunSybil
Topics: Artificial Intelligence
Topics covered:
- The market already has Breach and Attack Simulation (BAS) for testing known TTPs. You’re calling this 'AI-powered' red teaming. Is this just a fancy LLM stringing together known attacks, or is there a genuine agent here that can discover a truly novel attack path that a human hasn't scripted for it?
- Let's talk about the 'so what?' problem. Pentest reports are famous for becoming shelf-ware. How do you turn a complex AI finding into an actionable ticket for a developer, and more importantly, how do you help a CISO decide which of the thousand 'criticals' to actually fix first?
- You're asking customers to unleash a 'hacker AI' in their production environment. That’s terrifying. What are the 'do no harm' guardrails? How do you guarantee your AI won't accidentally rm -rf a critical server or cause a denial of service while it's 'exploring'?
- You mentioned the AI is particularly good at finding authentication bugs. Why that specific category? What's the secret sauce there, and what's the reaction from customers when you show them those types of flaws?
- Is this AI meant to replace a human red teamer, or make them better? Does it automate the boring stuff so experts can focus on creative business logic attacks, or is the ultimate goal to automate the entire red team function away?
- So, is this just about finding holes, or are you closing the loop for the blue team? Can the attack paths your AI finds be automatically translated into high-fidelity detection rules? Is the end goal a continuous 'purple team engine' that’s constantly training our defenses?
- Also, what about fixing? What makes your findings more fixable?
- What will happen to red team testing in 2-3 years if this technology gets better?
Resources:
#250
November 3, 2025
EP250 The End of "Collect Everything"? Moving from Centralization to Data Access?
Guest:
- Balazs Scheidler, CEO at Axoflow, original founder of syslog-ng
Topics: SIEM and SOC
Topics covered:
- Are we really moving toward “access to security data” and away from “centralizing the data”?
- How can we detect without keeping all logs in the same storage?
- Is the data pipeline part of the SIEM, or is it standalone? Will this just collapse into the SIEM soon?
- Tell us about the issues with log pipelines in the past.
- What about enrichment? Why do it in a pipeline, and not in a SIEM?
- We are unable to share enough practices between security teams. How are we fixing it? Are pipelines part of the answer?
- Do you have a piece of advice for people who want to do more than save on their SIEM costs?
Resources:
- EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
- EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures
- EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
- Axoflow podcast and Anton on it
- “Decoupled SIEM: Where I Think We Are Now?” blog
- “Decoupled SIEM: Brilliant or Stupid?” blog
- “Output-driven SIEM — 13 years later” blog
#249
October 27, 2025
EP249 Data First: What Really Makes Your SOC 'AI Ready'?
Guest:
- Monzy Merza, co-founder and CEO at Crogl
Topics: SIEM and SOC
Topics covered:
- We often hear about the aspirational idea of an "IronMan suit" for the SOC—a system that empowers analysts to be faster and more effective. What does this ideal future of security operations look like from your perspective, and what are the primary obstacles preventing SOCs from achieving it today?
- You've also raised a metaphor of AI in the SOC as a "Dr. Jekyll and Mr. Hyde" situation. Could you walk us through what you see as the "Jekyll"—the noble, beneficial promise of AI—and what factors can turn it into the dangerous "Mr. Hyde"?
- Let's drill down into the heart of the "Mr. Hyde" problem: the data. Many believe that AI can fix a team's messy data, but you've noted that "it's all about the data, duh." What's the story?
- “AI ready SOC” - What is the foundational work a SOC needs to do to ensure their data is AI-ready, and what happens when they skip this step?
- And is there anything we can do to use AI to help with this foundational problem?
- How do we measure progress toward an AI SOC? What gets better, and when? How would we know?
- What SOC metrics will show improvement? Will anything get worse?
Resources:
- EP242 The AI SOC: Is This The Automation We've Been Waiting For?
- EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
- EP227 AI-Native MDR: Betting on the Future of Security Operations?
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
- "Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!" blog
- Nassim Taleb “Antifragile” book
- “AI Superpowers” book
- “Attention Is All You Need” paper
#248
October 20, 2025
EP248 Cloud IR Tabletop Wins: How to Stop Playing Security Theater and Start Practicing
Topics: Cloud IR and Forensics
Topics covered:
- What is this tabletop thing? Please tell us about running a good security incident tabletop.
- Why are tabletops for incident response preparedness so amazingly effective yet rarely done well?
- This is cheap/easy/useful, so why do so many fail to do it? Why are tabletops seen as some kind of elite pursuit?
- What’s your favorite Cloud-centric scenario for tabletop exercises? Ransomware? But there is little ransomware in the cloud, no?
- What are other good cloud tabletop scenarios?
Resources:
- EP60 Impersonating Service Accounts in GCP and Beyond: Cloud Security Is About IAM?
- EP179 Teamwork Under Stress: Expedition Behavior in Cybersecurity Incident Response
- EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends
- EP177 Cloud Incident Confessions: Top 5 Mistakes Leading to Breaches from Mandiant
- EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics
- EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster?
#247
October 13, 2025
EP247 The Evolving CISO: From Security Cop to Cloud & AI Champion
Guest:
- David Gee, Board Risk Advisor, Non-Executive Director & Author, former CISO
Topics: CISO
Topics covered:
- Drawing from the "Aspiring CIO and CISO" book's focus on continuous improvement, how have you seen the necessary skills, knowledge, experience, and behaviors for a CISO evolve, especially when guiding an organization through a transformation?
- Could you share lessons learned about leadership and organizational resilience during such a critical period, and how does that experience reshape your approach to future transformations?
- Many organizations are undergoing transformations, often heavily involving cloud technologies. From your perspective, what is the most crucial—and perhaps often overlooked—role a CISO plays in ensuring security is an enabler, not a roadblock, during such large-scale changes?
- Have you ever seen a CISO who is a cloud champion for the organization?
- What is your best advice for a CISO meeting cloud for the first time?
- What is your best advice for a CISO meeting AI for the first time?
- How do you balance continuous self-improvement and development with the day-to-day pressures and responsibilities?
Resources:
- “A Day in the Life of a CISO: Personal Mentorship from 24+ Battle-Tested CISOs — Mentoring We Never Got” book
- “The Aspiring CIO and CISO: A career guide to developing leadership skills, knowledge, experience, and behavior” book
- EP201 Every CTO Should Be a CSTO (Or Else!) - Transformation Lessons from The Hoff
- EP101 Cloud Threat Detection Lessons from a CISO
- EP104 CISO Walks Into the Cloud: And The Magic Starts to Happen!
- EP129 How CISO Cloud Dreams and Realities Collide
- All CISO podcast episodes
- “Shadow Agents: A New Era of Shadow AI Risk in the Enterprise” blog
- “Blocking shadow agents won’t work. Here’s a more secure way forward” blog