In this second part of a two-part episode on the dangers of AI and how to deal with them in the context of large language models (LLMs), David is joined by our #ArmchairArchitects, Uli and Eric (@mougue), for a conversation that covers how LLMs differ from traditional models, the challenge of trusting the model architecture, ethical considerations including bias, privacy, governance, and accountability, and practical steps such as reporting, analytics, and data visualization.
Be sure to watch Armchair Architects: The Danger Zone (Part 1) https://aka.ms/azenable/146 before watching this episode of the #AzureEnablementShow.
Resources
• Microsoft Azure AI Fundamentals: Generative AI https://learn.microsoft.com/training/paths/introduction-generative-ai/
• Responsible and trusted AI https://learn.microsoft.com/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai
• Architectural approaches for AI and ML in multitenant solutions https://learn.microsoft.com/azure/architecture/guide/multitenant/approaches/ai-ml
• Training: AI engineer https://learn.microsoft.com/credentials/certifications/roles/ai-engineer
• Responsible use of AI with Azure AI services https://learn.microsoft.com/azure/ai-services/responsible-use-of-ai-overview
Related Episodes
• Armchair Architects: The Danger Zone (Part 1) https://aka.ms/azenable/146
• Watch more episodes in the Armchair Architects Series https://aka.ms/azenable/ArmchairArchitects
• Watch more episodes in the Well-Architected Series https://aka.ms/azenable/yt/wa-playlist
• Check out the Azure Enablement Show https://aka.ms/AzureEnablementShow
Chapters
0:00 Introduction
0:18 Open questions from Part 1
2:15 Practical tips to get started
3:14 LLMs are not traditional models
3:33 Financial services approach
5:06 Commoditization of Generative AI
6:45 Feedback for prompt tuning
9:10 Content safety
12:10 Teaser for next episodes