Running Large Language Models (LLMs) locally represents a significant shift in an era where cloud reliance is the norm. This approach gives you greater control over data privacy and processing speed, and it unlocks a new level of customization and immediate integration with your systems. Whether you’re just starting with LLMs or you’ve been exploring their capabilities for some time, this demo will deepen your understanding of what becomes possible when you run LLMs locally.
To learn more, please check out these resources:
* https://aka.ms/build25/plan/BestModelGenAISolution
𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀:
* Rodrigo Diaz Concha
𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻:
This is one of many sessions from the Microsoft Build 2025 event. View even more sessions on-demand and learn about Microsoft Build at https://build.microsoft.com
DEM524 | English (US) | AI, Copilot & Agents
#MSBuild
Chapters:
00:00:00 – Introduction of Rodrigo Diaz Concha
00:04:19 – Downloading Specific Software Versions
00:04:42 – Management of Software and Models
00:05:40 – Demonstration of Loading Models on Foundry
00:08:00 – Model Loading for Different Processors
00:08:05 – Capabilities of Foundry Local
00:08:45 – CPU Performance with Model
00:14:55 – Demonstration of Local Model Functionality without Internet
00:15:08 – Benefits of Running Local Large Language Models