Episode 1: What is AI Red Teaming? | AI Red Teaming 101 with Amanda and Gary

Estimated read time 2 min read

Post Content

​ Welcome to AI Red Teaming 101!

In this kickoff episode of this learning series, Dr. Amanda Minnich (Principal Research Manager, Microsoft AI Red Team) and Gary Lopez (Principal Offensive AI Scientist, ADAPT) introduce the fundamentals of AI red teaming. Whether you’re a security professional, ML practitioner, or just curious about the risks of generative AI, this course is designed to get you started—no PhD required!

They walk through the origins of AI red teaming, key risks like prompt injection and model misalignment, and how Microsoft’s AI Red Team helps secure Copilots and AI models before release.

What You’ll Learn:

The difference between traditional and AI red teaming
Core risks in generative AI systems (fabrication, alignment gaps, prompt injection)
How Microsoft’s AI Red Team secures real-world AI deployments

✅ Chapters:
00:00 – Welcome to AI Red Teaming 101
00:19 – Who this course is for
00:36 – What you’ll learn in this series
01:00 – Open-sourced labs and hands-on techniques
01:34 – Meet Dr. Amanda Minnich
02:01 – Meet Gary Lopez
02:46 – What is AI red teaming?
03:36 – Key risks in generative AI
04:45 – Microsoft’s AI Red Team mission
05:48 – What’s next in the series

✅ Links & Resources:

AI Red Teaming 101 Episodes: aka.ms/AIRT101
AI Red Teaming 101 Labs & Tools: aka.ms/AIRTlabs
Microsoft AI Red Team Overview: aka.ms/airedteam

✅ Speakers:
Amanda Minnich – Principal Research Manager, Microsoft AI Red Team
LinkedIn: https://www.linkedin.com/in/amandajeanminnich/

Webpage: https://www.amandaminnich.info/

Gary Lopez – Principal Offensive AI Scientist, ADAPT
LinkedIn: https://www.linkedin.com/in/gary-lopez/

#AIRedTeam #AIRT #Microsoft #AI #AISecurity #AIRedTeaming #GenerativeAI #Cybersecurity #InfoSec #cybersecurityawareness   Read More Microsoft Developer 

You May Also Like

More From Author