Episode 6: Prompt Injection Attacks: Multi-Turn | AI Red Teaming 101

Estimated read time: 2 min


Welcome back to AI Red Teaming 101!

In this episode, Gary Lopez, Principal Offensive AI Scientist on Microsoft's ADAPT team, explores multi-turn attacks: adversarial techniques that gradually steer models into bypassing their safety protections. You'll learn about two advanced approaches: Skeleton Key, which overrides a model's safety instructions outright, and Crescendo, which guides a model toward harmful outputs one small step at a time.

Gary walks through real-world examples and demonstrates how these attacks are executed in Microsoft’s red teaming labs, showing how even the most advanced models can be manipulated with persistent, well-structured inputs.

What You’ll Learn:

How multi-turn attacks like Skeleton Key and Crescendo work
Why gradual input manipulation can bypass model safety
How to simulate these attacks using Microsoft’s AI red teaming labs
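The pattern these attacks share can be illustrated with a plain data-structure sketch: each adversarial turn is appended to the same conversation history, so the model's context shifts gradually rather than in one obvious jump. The sketch below is a simplified illustration, not code from the episode; `send_to_model` is a hypothetical stand-in for any real chat API, and the example prompts are deliberately harmless.

```python
# Minimal sketch of the multi-turn structure behind Crescendo-style attacks:
# every turn is appended to one shared history, so context accumulates.
# `send_to_model` is a hypothetical placeholder for a real chat-model call.

def send_to_model(history):
    # Placeholder: a real red-teaming harness would call a chat model here.
    return f"(model reply to turn {len(history)})"

def run_multi_turn(turns):
    """Feed a sequence of incrementally escalating prompts into one shared history."""
    history = []
    for prompt in turns:
        history.append({"role": "user", "content": prompt})
        reply = send_to_model(history)
        history.append({"role": "assistant", "content": reply})
    return history

# Benign illustration: each prompt builds on the previous answer,
# mirroring Crescendo's gradual steering (here with a harmless topic).
turns = [
    "Tell me about the history of fireworks.",
    "What materials were used in early displays?",
    "How did manufacturing techniques change over time?",
]
history = run_multi_turn(turns)
```

The key point the sketch captures is that no single prompt looks overtly malicious; it is the accumulated context across turns that does the steering, which is why single-turn input filters often miss these attacks.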

✅ Chapters:
00:00 – Welcome & episode overview
00:20 – What are multi-turn attacks?
01:00 – Skeleton Key attack explained
02:00 – Crescendo attack strategy
04:00 – Real-world example: historical context to harmful output
05:30 – Demo: Crescendo in action
07:00 – Key takeaways & next steps

✅ Links & Resources:
AI Red Teaming 101 Episodes: aka.ms/airt101
AI Red Teaming 101 Labs & Tools: aka.ms/airtlabs
Microsoft AI Red Team Overview: aka.ms/airedteam

✅ Speakers:
Amanda Minnich – Principal Research Manager, Microsoft AI Red Team
LinkedIn: https://www.linkedin.com/in/amandajeanminnich/

Webpage: https://www.amandaminnich.info/

Gary Lopez – Principal Offensive AI Scientist, ADAPT
LinkedIn: https://www.linkedin.com/in/gary-lopez/

#AIRedTeam #AIRT #Microsoft #AI #AISecurity #AIRedTeaming #GenerativeAI #Cybersecurity #InfoSec #cybersecurityawareness #PromptInjection #CrescendoAttack
