Episode 2: How Generative AI Models Work (and Why It Matters) | AI Red Teaming 101

Welcome back to AI Red Teaming 101!

In this episode, Dr. Amanda Minnich from Microsoft’s AI Red Team breaks down how generative AI models are built and why their architecture introduces unique security risks. From tokenization to transformers, you’ll learn how these systems generate language, where vulnerabilities emerge, and how red teamers exploit them.
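
To make the "generate language" part concrete before you watch, here is a minimal sketch of the autoregressive loop that underlies these models. The `next_token_probs` function is a hypothetical stand-in for a real model's forward pass, and the toy distribution is invented for illustration; this is not how any production model is implemented.

```python
import random

random.seed(0)  # reproducible toy output

# Hypothetical stand-in for a real model's forward pass: given the
# tokens so far, return a probability for each candidate next token.
# A trained model computes this distribution from learned weights.
def next_token_probs(context: list[str]) -> dict[str, float]:
    return {"red": 0.5, "teaming": 0.3, "<eos>": 0.2}

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        # Sample the next token in proportion to its probability;
        # this sampling step is why outputs vary run to run.
        choices, weights = zip(*probs.items())
        nxt = random.choices(choices, weights=weights, k=1)[0]
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens

print(generate(["ai"]))
```

The point that matters for red teaming: the model only ever predicts the next token from its context, so anything that lands in that context can steer generation.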

This foundational knowledge sets the stage for understanding real-world attacks like prompt injection, which we’ll explore in the next episode.

What You’ll Learn:

How generative AI models are trained and structured
Why tokenization and embeddings matter for security (see the sketch after this list)
How model architecture creates new attack surfaces
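
As a preview of the tokenization and embeddings discussion, here is a minimal, illustrative sketch in Python. The whitespace tokenizer, tiny vocabulary, and random embedding table are all assumptions for demonstration; real models use learned subword tokenizers (such as BPE) and trained embedding matrices.

```python
import random

# Toy whitespace tokenizer -- real models use learned subword
# schemes such as byte-pair encoding (BPE), not word splitting.
def tokenize(text: str) -> list[str]:
    return text.lower().split()

# Hypothetical vocabulary mapping tokens to integer IDs.
vocab = {"ignore": 0, "previous": 1, "instructions": 2, "<unk>": 3}

def encode(tokens: list[str]) -> list[int]:
    return [vocab.get(t, vocab["<unk>"]) for t in tokens]

# Toy embedding table: each token ID maps to a small random vector.
# In a trained model these vectors are learned so that related
# tokens land near each other in vector space.
random.seed(0)
EMBED_DIM = 4
embeddings = {i: [random.uniform(-1, 1) for _ in range(EMBED_DIM)]
              for i in vocab.values()}

ids = encode(tokenize("Ignore previous instructions"))
print(ids)                               # [0, 1, 2]
print([embeddings[i] for i in ids][0])   # 4-d vector for "ignore"
```

This is why tokenization and embeddings are security-relevant: the model never sees raw text, only token IDs and the vectors they map to.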

✅ Chapters:
00:00 – Welcome back to AI Red Teaming 101
00:14 – What we covered in Episode 1
00:23 – Why understanding model architecture matters
00:36 – What makes generative AI different
01:00 – How generative models generate
01:34 – Large vs. small language models
02:06 – Training stages: pre-training, post-training, red teaming
03:20 – Tokenization explained
04:00 – Embeddings and vector space
04:40 – Transformers and attention (see the sketch after the chapter list)
05:30 – Why model context is both powerful and risky
06:00 – What’s next: prompt injection attacks
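
For a concrete feel for the attention mechanism covered at 04:40, here is a minimal sketch of scaled dot-product attention in plain Python. The single-query setup and the hard-coded vectors are assumptions for illustration; production transformers use learned projection matrices, many attention heads, and stacked layers.

```python
import math

def softmax(scores: list[float]) -> list[float]:
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query: list[float],
              keys: list[list[float]],
              values: list[list[float]]) -> list[float]:
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    # Similarity of the query to every key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Output is the attention-weighted mix of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Tiny made-up example: the query attends most to the second key,
# so the output leans toward the second value vector.
q = [1.0, 0.0]
ks = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
vs = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
print(attention(q, ks, vs))
```

Attention is what lets the model weigh every token in its context window, which connects directly to the 05:30 chapter: that same context is where untrusted content can exert influence.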

✅ Links & Resources:
AI Red Teaming 101 Episodes: aka.ms/airt101
AI Red Teaming 101 Labs & Tools: aka.ms/airtlabs
Microsoft AI Red Team Overview: aka.ms/airedteam

✅ Speakers:
Amanda Minnich – Principal Research Manager, Microsoft AI Red Team
LinkedIn: https://www.linkedin.com/in/amandajeanminnich/

Webpage: https://www.amandaminnich.info/

Gary Lopez – Principal Offensive AI Scientist, ADAPT
LinkedIn: https://www.linkedin.com/in/gary-lopez/

#AIRedTeam #AIRT #Microsoft #AI #AISecurity #AIRedTeaming #GenerativeAI #Cybersecurity #InfoSec #cybersecurityawareness #LLMs
