Post Content
Welcome to the complete AI Red Teaming 101 series!
This beginner-friendly series covers the essential basics of AI red teaming, giving you the core concepts and techniques you need to understand generative AI security risks. Led by Dr. Amanda Minnich (Principal Research Manager, Microsoft AI Red Team), Gary Lopez (Principal Offensive AI Scientist, ADAPT), and Nina Chikanov (Security Software Engineer II, Microsoft AI Red Team), this series demystifies the critical security challenges facing modern AI deployments.
Whether you’re a security professional, ML practitioner, or just curious about AI risks, this complete series is designed to get you started—no PhD required! You’ll learn how Microsoft’s AI Red Team secures real-world AI systems like Copilots before release and get hands-on experience with cutting-edge tools and techniques.
From understanding how generative AI models work to trying out multi-turn attacks and exploring defense strategies, this series covers the core fundamentals of AI red teaming. You’ll explore real-world vulnerabilities like prompt injection and jailbreaks, learn to use Microsoft’s open-source PyRIT tool for automated testing, and discover proven mitigation strategies like Spotlighting that help secure AI applications in production.
What You’ll Learn:
The fundamentals of AI red teaming vs. traditional red teaming
How generative AI models are built and where vulnerabilities emerge
Direct and indirect prompt injection techniques with real-world examples
Advanced adversarial prompt engineering and manipulation tactics
Multi-turn attacks including Skeleton Key and Crescendo methodologies
Defensive strategies and Spotlighting techniques for robust AI security
How to automate red teaming workflows using PyRIT
Hands-on demonstrations of single-turn and multi-turn attack automation
Testing strategies for both text and image generation models
Ready to master AI red teaming? This complete series gives you the knowledge and tools to identify, exploit, and defend against the most critical vulnerabilities in generative AI systems.
✅ Chapters:
00:00 – Episode 1: What is AI Red Teaming? | AI Red Teaming 101
06:05 – Episode 2: How Generative AI Models Work (and Why It Matters) | AI Red Teaming 101
13:29 – Episode 3: Direct Prompt Injection Explained | AI Red Teaming 101
19:06 – Episode 4: Indirect Prompt Injection Explained | AI Red Teaming 101
25:25 – Episode 5: Prompt Injection Attacks – Single-Turn | AI Red Teaming 101
35:32 – Episode 6: Prompt Injection Attacks: Multi-Turn | AI Red Teaming 101
✅ Resources: https://aka.ms/airt101
42:59 – Episode 7: Defending Against Attacks: Mitigations and Guardrails | AI Red Teaming 101
49:53 – Episode 8: Automating AI Red Teaming with PyRIT | AI Red Teaming 101
57:55 – Episode 9: Automating Single-Turn Attacks with PyRIT | AI Red Teaming 101
01:06:26 – Episode 10: Automating Multi-Turn Attacks with PyRIT | AI Red Teaming 101 Read More Microsoft Developer