EP68 How We Attack AI? Learn More at Our RSA Panel!
#68
June 6, 2022
Guest:
- Nicholas Carlini, Research Scientist @ Google
Topics covered:
- What is your threat model for a large-scale AI system? How do you approach this problem? How do you rank the attacks?
- How do you judge if an attack is something to mitigate? How do you separate realistic from theoretical?
- Are there AI threats that were theoretical in 2020, but may become a daily occurrence in 2025?
- What are the threat-derived lessons for securing AI?
- Do we practice the same or different approaches for secure AI and reliable AI?
- How does the relative lack of transparency in AI help (or hurt?) attackers and defenders?
Do you have something cool to share? Some questions? Let us know:
- Web: cloud.withgoogle.com/cloudsecurity/podcast
- Mail: cloudsecuritypodcast@google.com
- Twitter: @CloudSecPodcast