Post Content
To keep pace with evolving AI risks, organizations need tools to effectively test their AI systems, simulate adversarial attacks, and uncover weaknesses before bad actors can exploit them. Learn how Azure AI Foundry and PyRIT can help your organization automate parts of the AI red teaming process, so you can scale up testing efforts and improve AI security and safety across use cases.
To learn more, please check out these resources:
* https://aka.ms/build25/github/DEM552
𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀:
* Nagkumar Arkalgud
* Nayan Paul
* Minsoo Thigpen
𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻:
This is one of many sessions from the Microsoft Build 2025 event. View even more sessions on-demand and learn about Microsoft Build at https://build.microsoft.com
DEM552 | English (US) | Security
#MSBuild, #Security
Chapters:
0:00 – Introduction to the session with Minsu and team
00:00:10 – Introduction of speakers including representatives from Accenture
00:04:56 – Explaining the AI Application Security Measures
00:06:11 – Providing Comprehensive Safety Infrastructure and Red Teaming Scorecard
00:07:33 – Running the Red Team Plugin to Enhance Application Security
00:07:58 – Overview of Automatic Conversation List
00:10:02 – Possibility of an Interactive Session with AI
00:13:32 – Adjusting Attack Mitigations using Azure AI Foundry’s Guard Rails
00:14:31 – Conclusion and Resources for AI Development Read More Microsoft Developer