Is your LLM-powered app safe? Evaluate it! | DEM522

Join us for a live demo where we'll show you how to add safety evaluations to apps built on LLMs! Using the powerful Azure AI Evaluation SDK, we'll demonstrate how to make sure your app responses are safe and free from harmful content. We'll walk you through the process step by step: from setting up your Azure AI Project, to simulating app responses with the AdversarialSimulator, to catching issues with the ContentSafetyEvaluator.
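
For a sense of what that flow looks like in code, here is a minimal sketch using the azure-ai-evaluation Python package. The project values, environment variable names, and the my_app_response stub are placeholders for your own setup, not the exact code from the demo:

```python
import asyncio
import os

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ContentSafetyEvaluator
from azure.ai.evaluation.simulator import AdversarialScenario, AdversarialSimulator

# Placeholder project details -- substitute your own Azure AI Project values.
azure_ai_project = {
    "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
    "resource_group_name": os.environ["AZURE_RESOURCE_GROUP"],
    "project_name": os.environ["AZURE_AI_PROJECT_NAME"],
}
credential = DefaultAzureCredential()

def my_app_response(query: str) -> str:
    # Stub: replace with the call into your actual LLM-powered app.
    return "Sorry, I can't help with that."

# The simulator drives this callback with adversarial user messages
# and records whatever the app sends back.
async def app_target(messages, stream=False, session_state=None, context=None):
    latest_query = messages["messages"][-1]["content"]
    reply = my_app_response(latest_query)
    messages["messages"].append({"content": reply, "role": "assistant"})
    return {
        "messages": messages["messages"],
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }

async def main():
    # Generate adversarial question-answering conversations against the app.
    simulator = AdversarialSimulator(azure_ai_project=azure_ai_project, credential=credential)
    outputs = await simulator(
        scenario=AdversarialScenario.ADVERSARIAL_QA,
        target=app_target,
        max_simulation_results=10,
    )

    # Score each simulated exchange for violence, sexual, self-harm,
    # and hate/unfairness content.
    safety_eval = ContentSafetyEvaluator(credential=credential, azure_ai_project=azure_ai_project)
    for conversation in outputs:
        query = conversation["messages"][0]["content"]
        response = conversation["messages"][1]["content"]
        print(safety_eval(query=query, response=response))

asyncio.run(main())
```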

To learn more, please check out these resources:
* https://aka.ms/build25/plan/ADAI_DevsAdvPlan

Speakers:
* Pamela Fox

Session Information:
This is one of many sessions from the Microsoft Build 2025 event. View even more sessions on-demand and learn about Microsoft Build at https://build.microsoft.com

DEM522 | English (US) | AI, Copilot & Agents

#MSBuild

Chapters:
00:00:00 – Discussion of LLM-Powered Apps
00:00:56 – Demo of a Basic Data Retrieval Application
00:01:28 – Exploring Content Filters and Their Efficacy
00:04:23 – Introduction to Automated Red Teaming with Azure AI
00:05:36 – Configuring Red Team Class Parameters (see the sketch below)
00:07:27 – A Moderate-Complexity Transformation Example
00:09:33 – Safety Features of the GPT-4o mini Model on Azure
00:13:21 – Exploring Content Safety with URL Escaping Attack Results
00:13:50 – Documentation and QR Code for Further Learning
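
The red-teaming chapters above center on the SDK's RedTeam class. As a rough sketch (assuming the preview red_team module in azure-ai-evaluation; the project values and the app_target stub are placeholders), a scan covering moderate-complexity transformations and the URL-escaping attack might be configured like this:

```python
import asyncio
import os

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import AttackStrategy, RedTeam, RiskCategory

# Placeholder project details -- substitute your own Azure AI Project values.
azure_ai_project = {
    "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
    "resource_group_name": os.environ["AZURE_RESOURCE_GROUP"],
    "project_name": os.environ["AZURE_AI_PROJECT_NAME"],
}

def app_target(query: str) -> str:
    # Stub: the scanner sends attack prompts here; route them to your app.
    return "Sorry, I can't help with that."

async def main():
    red_team = RedTeam(
        azure_ai_project=azure_ai_project,
        credential=DefaultAzureCredential(),
        risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
        num_objectives=5,  # attack objectives generated per risk category
    )
    # MODERATE groups the medium-complexity prompt transformations;
    # Url applies the URL-escaping attack discussed in the session.
    result = await red_team.scan(
        target=app_target,
        scan_name="build-demo-scan",
        attack_strategies=[AttackStrategy.MODERATE, AttackStrategy.Url],
    )
    print(result)

asyncio.run(main())
```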
