In this episode of the #AzureEnablementShow, Uli, Eric, and David continue their discussion of vector databases and LLMs, including when to use prompt engineering and the importance of fine-tuning your data. Uli suggests that there are two things LLMs aren’t good at, then offers tips on workarounds. The conversation wraps up with a discussion of some of the pros and cons of vectorization. This is part two of a two-part series.
Resources
• Submit a training job in Studio https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-with-ui?view=azureml-api-2
• Artificial intelligence (AI) architecture design https://learn.microsoft.com/en-us/azure/architecture/ai-ml/
• Prepare for AI engineering https://learn.microsoft.com/en-us/training/paths/prepare-for-ai-engineering/
• Fundamentals of Generative AI https://learn.microsoft.com/en-us/training/modules/fundamentals-generative-ai/
• What’s new in Azure AI Search https://learn.microsoft.com/en-us/azure/search/whats-new
Related episodes
• Armchair Architects: LLMs & Vector Databases (pt. 1) https://aka.ms/azenable/141
• Watch more episodes in the Armchair Architects Series https://aka.ms/azenable/ArmchairArchitects
• Watch more episodes in the Well-Architected Series https://aka.ms/azenable/yt/wa-playlist
Chapters
0:00 Introduction
0:15 Recap on Embedding
0:39 Consider prompt engineering first
2:06 Fine-tune your data
2:58 Help with hallucinations
4:49 LLMs and math
4:59 LLMs and structured data
5:19 Add code to prompts
5:52 Pipeline based programming
7:54 Vector database vs. vector index
9:00 Arriving at vectorization
9:52 Vectorization alone isn’t the answer