Learn how evaluation and continuous monitoring can help you iterate quickly and move from pilot to production faster. Discover how to kickstart Agent development with Azure AI Foundry’s evaluations and rapidly iterate using integrated CI/CD workflows. Monitor your AI applications and Agents with out-of-the-box continuous monitoring tools for evaluation and tracing. You’ll leave with new tools and best practices to scale your AI applications.
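For a sense of what the evaluations piece looks like in code, here is a minimal sketch using the azure-ai-evaluation Python SDK (the library behind Azure AI Foundry evaluations). The endpoint, deployment, and key values are placeholders you would replace with your own, and the sample query/response data is illustrative only, not from the session.

```python
# Minimal sketch: scoring a single response with AI-assisted evaluators.
# Placeholder values below are assumptions; substitute your own resource details.
from azure.ai.evaluation import RelevanceEvaluator, GroundednessEvaluator

model_config = {
    "azure_endpoint": "https://<your-foundry-resource>.openai.azure.com",  # placeholder
    "azure_deployment": "<your-judge-model-deployment>",                   # placeholder
    "api_key": "<your-api-key>",                                           # placeholder
}

# AI-assisted evaluators use the judge model configured above to grade a response.
relevance = RelevanceEvaluator(model_config)
groundedness = GroundednessEvaluator(model_config)

# Illustrative single-turn example.
query = "What tent should I buy for cold weather?"
context = "The Alpine Explorer tent is rated for four-season use."
response = "The Alpine Explorer tent is a good four-season option."

# Each call returns a dict of scores you can log, compare across iterations,
# or wire into a CI/CD gate.
print(relevance(query=query, response=response))
print(groundedness(query=query, context=context, response=response))
```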
To learn more, please check out these resources:
* https://aka.ms/build25/plan/ManageGenAILifecycles
𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀:
* Mehrnoosh Sameki
* Justin Hofer
* Akanksha Kumari
* Sebastian Kohlmeier
𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻:
This is one of many sessions from the Microsoft Build 2025 event. View even more sessions on-demand and learn about Microsoft Build at https://build.microsoft.com
BRK168 | English (US) | AI, Copilot & Agents
Related Sessions:
DEM547 — https://build.microsoft.com/sessions/DEM547?wt.mc_id=yt_PLlrxD0HtieHg4OIwuQGsVgpQhTKl4fFdE
#MSBuild
Chapters:
00:00:00 – Mission on Agentic Observability with Azure AI Foundry
00:05:03 – Custom Evaluator Framework and Introduction to Contoso Outdoor Support Agent
00:16:02 – AI Red Teaming Agent for Vulnerability Testing
00:19:02 – Transition and Introduction to Continuous Integration Workflow
00:26:45 – Introducing the Testing Scenario
00:27:34 – Interest in Evaluations Benchmark
00:42:02 – Introduction of Azure Monitor Product Manager
00:44:00 – Announcement of MCP Conventions