✏️ Study this course interactively on Scrimba: https://v2.scrimba.com/intro-to-mistral-ai-c035?utm_source=youtube&utm_medium=video&utm_campaign=fcc-mistral
Learn how to use the Mistral AI API to build intelligent apps, all the way from simple chat completions to advanced use cases like Retrieval-Augmented Generation (RAG) and function calling. Created in collaboration between Mistral AI and Scrimba.
Code is available on the Scrimba course page for each lesson.
Starting off, you’ll get an introduction to Mistral’s open-source models, including Mistral 7B and Mixtral 8x7B, and progress to their commercial models. You’ll gain hands-on experience with the full suite of models on Mistral’s La Plateforme.
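To give a feel for what the first lessons cover, here is a minimal sketch of a chat completion request against La Plateforme's REST endpoint (`https://api.mistral.ai/v1/chat/completions`, per Mistral's API docs). The model name and prompt are illustrative placeholders, and nothing is sent unless you call `chatCompletion()` with a valid API key:

```javascript
// Request body in the OpenAI-compatible shape Mistral's chat API accepts.
const payload = {
  model: "mistral-small-latest", // swap for any model available on La Plateforme
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is RAG in one sentence?" },
  ],
  temperature: 0.7,
};

// Send the request and return the assistant's reply text.
async function chatCompletion(apiKey) {
  const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The course itself uses Mistral's official client library; the raw `fetch` version above just makes the request/response shape explicit.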
The rest of the course focuses mainly on two essential paradigms in AI engineering: knowledge retrieval and AI agents. In the first part, you’ll learn how to split text documents with LangChain, convert the chunks into embeddings, store them in a vector database, and finally perform retrieval.
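The chunk → embed → store → retrieve pipeline can be sketched offline. This is a simplified stand-in, not the course's code: LangChain's text splitter and a real vector database are replaced by a naive fixed-size splitter and an in-memory array, and the embeddings are toy hand-written vectors (in the course they come from Mistral's embeddings endpoint):

```javascript
// Naive fixed-size splitter with overlap (stand-in for LangChain's splitter).
function splitDocument(text, chunkSize = 40, overlap = 10) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize - overlap) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

// Cosine similarity: the standard way to compare embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] ** 2;
    normB += b[i] ** 2;
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// "Vector DB": pairs of (embedding, original chunk), held in memory.
const store = [
  { embedding: [0.9, 0.1, 0.0], chunk: "Mistral 7B is an open-source model." },
  { embedding: [0.1, 0.9, 0.0], chunk: "RAG retrieves context before generating." },
];

// Retrieval: embed the query (a toy vector here), rank chunks by similarity.
function retrieve(queryEmbedding, k = 1) {
  return [...store]
    .sort(
      (x, y) =>
        cosineSimilarity(queryEmbedding, y.embedding) -
        cosineSimilarity(queryEmbedding, x.embedding)
    )
    .slice(0, k)
    .map((entry) => entry.chunk);
}
```

The retrieved chunks are then pasted into the prompt as context, which is the "augmented generation" half of RAG.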
In the AI agents segment, you’ll learn how to give Mistral access to the functions within your app, and let its models decide when to call them. This skill will enable you to create a whole new type of user experience, where people can interact with your apps through conversation instead of just clicking.
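The core of that loop, dispatching a tool call the model asked for, can be sketched with the model's response mocked so it runs offline. The tool schema mirrors the OpenAI-style format Mistral's chat API accepts; the function name and arguments here are invented for illustration:

```javascript
// Tool schema advertised to the model (illustrative example function).
const tools = [
  {
    type: "function",
    function: {
      name: "getOrderStatus",
      description: "Look up the status of an order by id",
      parameters: {
        type: "object",
        properties: { orderId: { type: "string" } },
        required: ["orderId"],
      },
    },
  },
];

// The app-side functions the model is allowed to trigger.
const availableFunctions = {
  getOrderStatus: ({ orderId }) => ({ orderId, status: "shipped" }),
};

// Mocked assistant message, shaped like a tool-call response from the API.
const assistantMessage = {
  role: "assistant",
  tool_calls: [
    { id: "call_1", function: { name: "getOrderStatus", arguments: '{"orderId":"A123"}' } },
  ],
};

const messages = [{ role: "user", content: "Where is order A123?" }];
messages.push(assistantMessage);

// Unpack the function name and JSON-encoded arguments, call the function,
// and append the result as a "tool" message for the next completion.
for (const toolCall of assistantMessage.tool_calls ?? []) {
  const { name, arguments: argsJson } = toolCall.function;
  const result = availableFunctions[name](JSON.parse(argsJson));
  messages.push({
    role: "tool",
    name,
    content: JSON.stringify(result),
    tool_call_id: toolCall.id,
  });
}
```

In the real loop, the updated `messages` array goes back to the model, which either asks for another tool call or answers the user with the result.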
Towards the end, you’ll also see how to use Ollama to easily run inference on your own computer, and use it as the backbone of any AI app you develop locally.
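Since Ollama exposes a local REST API (it listens on `http://localhost:11434` by default), a locally hosted Mistral model can be called much like the hosted one. A minimal sketch, assuming you have pulled the model with `ollama pull mistral`; nothing is sent unless you call `localChat()` with Ollama running:

```javascript
// Request body for Ollama's /api/chat endpoint.
const ollamaRequest = {
  model: "mistral", // pulled beforehand with `ollama pull mistral`
  messages: [{ role: "user", content: "Summarize RAG in one sentence." }],
  stream: false, // return one JSON response instead of a token stream
};

// Call the local Ollama server and return the reply text.
async function localChat() {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(ollamaRequest),
  });
  const data = await res.json();
  return data.message.content; // same chat-style shape as hosted APIs
}
```

Because the request/response shape stays chat-style, swapping the hosted backend for the local one is mostly a matter of changing the URL and model name.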
Created by Per Borgen from Scrimba. https://www.youtube.com/c/Scrimba
⭐️ Contents ⭐️
⌨️ (0:00:00) Welcome to the course
⌨️ (0:02:36) Intro to Mistral by Sophia Yang
⌨️ (0:05:41) Sign up for La Plateforme
⌨️ (0:07:08) Mistral’s Chat Completion API
⌨️ (0:11:19) Mistral’s Chat Completion API – part 2
⌨️ (0:15:20) Mistral’s models
⌨️ (0:19:54) What is RAG?
⌨️ (0:24:19) What are embeddings?
⌨️ (0:30:35) RAG – Chunking text with LangChain
⌨️ (0:35:27) RAG – Completing the splitDocument function
⌨️ (0:37:20) RAG – Creating our very first embedding
⌨️ (0:39:51) RAG – Challenge: embedding all chunks and preparing it for the vector db
⌨️ (0:44:34) Set up your vector database
⌨️ (0:47:27) Vector databases
⌨️ (0:47:51) RAG – Uploading data to the vector db
⌨️ (0:50:36) RAG – Query and Create completion
⌨️ (0:55:30) RAG – Improve the retrieval and complete the generation
⌨️ (1:00:26) Function calling
⌨️ (1:05:46) Function calling – Adding a second function
⌨️ (1:08:11) Function calling – Unpacking the function and arguments
⌨️ (1:12:00) Function calling – Making the call
⌨️ (1:13:49) Function calling – Updating the messages array
⌨️ (1:15:49) Function calling – Creating the loop
⌨️ (1:18:41) Running Mistral locally
⌨️ (1:22:39) Outro & recap – Mistral AI
Thanks to our Champion and Sponsor supporters:
davthecoder
jedi-or-sith
南宮千影
Agustín Kussrow
Nattira Maneerat
Heather Wcislo
Serhiy Kalinets
Justin Hual
Otis Morgan
Oscar Rahnama
—
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
#programming #freecodecamp #learn #learncode #learncoding