Fine-tuning and distillation with Azure AI Foundry | BRK150


Enhance your AI models using advanced fine-tuning and distillation techniques that deliver higher accuracy and efficiency. This session explores the latest techniques in Direct Preference Optimization (DPO), Reinforcement Fine-Tuning (RFT), and model distillation within Azure OpenAI. Learn to optimize performance while reducing data requirements and operational costs for smoother deployments.
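As a hedged illustration of the DPO workflow the session covers, the sketch below builds a tiny preference dataset in the JSONL layout used for preference fine-tuning (prompt plus a preferred and a non-preferred completion) and shows, commented out, how a DPO job could be submitted with the `openai` Python SDK against an Azure OpenAI resource. The example records, file names, endpoint, model name, and hyperparameters are all placeholders, not values from the session.

```python
import json

# Hypothetical preference pairs: each record holds a prompt plus a
# preferred and a non-preferred completion, which DPO uses to steer the
# model toward the preferred style of answer.
pairs = [
    {
        "prompt": "Summarize: Azure AI Foundry supports fine-tuning.",
        "preferred": "Azure AI Foundry lets you fine-tune models for your workload.",
        "rejected": "foundry stuff",
    },
]

def to_dpo_record(pair: dict) -> dict:
    """Convert a simple pair into a DPO training record (assumed schema)."""
    return {
        "input": {"messages": [{"role": "user", "content": pair["prompt"]}]},
        "preferred_output": [{"role": "assistant", "content": pair["preferred"]}],
        "non_preferred_output": [{"role": "assistant", "content": pair["rejected"]}],
    }

# Write one JSON object per line, the usual fine-tuning upload format.
with open("dpo_train.jsonl", "w", encoding="utf-8") as f:
    for pair in pairs:
        f.write(json.dumps(to_dpo_record(pair)) + "\n")

# Submitting the job (requires an Azure OpenAI resource; sketch only):
# from openai import AzureOpenAI
# client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                      api_key="<key>", api_version="<api-version>")
# training_file = client.files.create(file=open("dpo_train.jsonl", "rb"),
#                                     purpose="fine-tune")
# job = client.fine_tuning.jobs.create(
#     model="<base-model>",            # placeholder base model name
#     training_file=training_file.id,
#     method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
# )
```

In practice the quality of the preference pairs matters far more than their volume, which is part of how these methods reduce data requirements relative to training from scratch.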

To learn more, please check out these resources:
* https://aka.ms/build25/plan/CreateAgenticAISolutions

Speakers:
* Srinivas Gadde
* Omkar More
* Alicia Frame

Session Information:
This is one of many sessions from the Microsoft Build 2025 event. View even more sessions on-demand and learn about Microsoft Build at https://build.microsoft.com

BRK150 | English (US) | AI, Copilot & Agents

#MSBuild

Chapters:
00:00:00 – Overview of Today’s Session Topics
00:16:01 – Explanation of Model Evaluation and Deployment Process
00:19:15 – Practical Demonstration of Model Fine-Tuning and Deployment
00:30:32 – Demonstration of Reinforcement Fine-Tuning with Limited Data Samples
00:34:19 – Completion of Reinforcement Fine-Tuning and Deployment
00:47:46 – Benefits of Fine-Tuning for Latency
00:49:25 – Model Choices and Fine-Tuning Versus Prompt Engineering
