Post Content
โย To keep pace with evolving AI risks, organizations need tools to effectively test their AI systems, simulate adversarial attacks, and uncover weaknesses before bad actors can exploit them. Learn how Azure AI Foundry and PyRIT can help your organization automate parts of the AI red teaming process, so you can scale up testing efforts and improve AI security and safety across use cases.
To learn more, please check out these resources:
* https://aka.ms/build25/github/DEM552
๐ฆ๐ฝ๐ฒ๐ฎ๐ธ๐ฒ๐ฟ๐:
* Nagkumar Arkalgud
* Nayan Paul
* Minsoo Thigpen
๐ฆ๐ฒ๐๐๐ถ๐ผ๐ป ๐๐ป๐ณ๐ผ๐ฟ๐บ๐ฎ๐๐ถ๐ผ๐ป:
This is one of many sessions from the Microsoft Build 2025 event. View even more sessions on-demand and learn about Microsoft Build at https://build.microsoft.com
DEM552 | English (US) | Security
#MSBuild, #Security
Chapters:
0:00 – Introduction to the session with Minsu and team
00:00:10 – Introduction of speakers including representatives from Accenture
00:04:56 – Explaining the AI Application Security Measures
00:06:11 – Providing Comprehensive Safety Infrastructure and Red Teaming Scorecard
00:07:33 – Running the Red Team Plugin to Enhance Application Security
00:07:58 – Overview of Automatic Conversation List
00:10:02 – Possibility of an Interactive Session with AI
00:13:32 – Adjusting Attack Mitigations using Azure AI Foundry’s Guard Rails
00:14:31 – Conclusion and Resources for AI Developmentย ย ย Read Moreย Microsoft Developerย