Armchair Architects: The Danger Zone (Part 1)


In this two-part episode of the #AzureEnablementShow, David talks to our #ArmchairArchitects, Uli and Eric (@mougue), about the ethical and responsible considerations architects should weigh when using large language models (LLMs) in their applications, including the risks of leaking confidential information, societal biases, and increasing regulation.

Resources
• Confidential AI https://learn.microsoft.com/azure/confidential-computing/confidential-ai
• Responsible and trusted AI https://learn.microsoft.com/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai
• Azure confidential computing partners https://learn.microsoft.com/azure/confidential-computing/partner-pages/partner-pages-index
• Confidential Computing on Azure https://learn.microsoft.com/azure/confidential-computing/overview-azure-products

Related Episodes
• Watch more episodes in the Armchair Architects Series https://aka.ms/azenable/ArmchairArchitects
• Watch more episodes in the Well-Architected Series https://aka.ms/azenable/yt/wa-playlist
• Check out the Azure Enablement Show https://aka.ms/AzureEnablementShow

Chapters
0:00 Introduction
0:53 Thinking about Confidentiality
1:22 AI and ML vs. LLMs
2:14 Transparency in algorithms
2:35 Explainable AI
4:16 Picking a model and partner
6:48 Explainability or Observability
7:48 Create model using JSON
9:50 To be continued …
