Generative AI application stack and providing long term memory to LLMs | ODFP612


Learn about the role of long-term memory for Large Language Models (LLMs) in building highly performant and cost-effective Generative AI applications, such as semantic search, Retrieval-Augmented Generation (RAG), and AI-agent-powered applications. Learn how Microsoft Semantic Kernel, MongoDB Atlas Vector Search, and Search Nodes running on Microsoft Cloud can streamline the process for developers to build enterprise-grade LLM-powered applications.
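The long-term-memory pattern the session covers boils down to three steps: embed documents, store the vectors, and retrieve the nearest neighbors to enrich the LLM prompt at query time. As an illustration only (not the session's code), here is a minimal, library-free sketch of the retrieval step using cosine similarity over a toy in-memory store; the example texts and hard-coded embeddings are invented for demonstration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "long-term memory": (text, embedding) pairs.
# Real embeddings would come from an embedding model.
memory = [
    ("MongoDB Atlas supports vector search",      [0.9, 0.1, 0.0]),
    ("Semantic Kernel orchestrates LLM calls",    [0.1, 0.8, 0.2]),
    ("Search Nodes isolate search workloads",     [0.7, 0.2, 0.3]),
]

def retrieve(query_vec, k=2):
    """Return the k stored texts most similar to the query embedding."""
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# The retrieved texts would be injected into the LLM prompt (the "A" in RAG).
print(retrieve([1.0, 0.0, 0.1]))
```

In a production setup along the lines the session describes, this in-memory scan is replaced by an approximate-nearest-neighbor query against a vector index in MongoDB Atlas, with Semantic Kernel handling the embedding and prompt-orchestration steps.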

Speakers:
* Prakul Agarwal

Session information:
This video is one of many sessions delivered for the Microsoft Build 2024 event. View the full session schedule and learn more about Microsoft Build at https://build.microsoft.com

ODFP612 | English (US)

#MSBuild | Microsoft Developer
