Extending Llama-3 to 1M+ Tokens – Does it Impact the Performance?

In this video, we will look at the 1M+ token context version of Llama-3, the best open LLM, built by Gradient AI.

Discord: https://discord.com/invite/t4eYQRUcXB
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
Patreon: https://www.patreon.com/PromptEngineering
Consulting: https://calendly.com/engineerprompt/consulting-call
Business Contact: engineerprompt@gmail.com
Become a Member: http://tinyurl.com/y5h28s6h

Pre-configured localGPT VM: https://bit.ly/localGPT (use code PromptEngineering for 50% off).

Sign up for Advanced RAG:
https://tally.so/r/3y9bb0

LINKS:
Model: https://ollama.com/library/llama3-gradient (see the quick-start sketch below)
Ollama tutorial: https://youtu.be/MGr1V4LyGFA?si=oB5NYXj5W-sXBwjw
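
If you want to try the model yourself after pulling it with Ollama, here is a minimal sketch of calling it from Python. It assumes a local Ollama server on its default port (11434) and uses Ollama's REST API; the prompt and the num_ctx value are illustrative settings, not anything prescribed in the video, and the full 1M-token window needs far more memory than a typical local machine.

# Minimal sketch: query the llama3-gradient model through a local Ollama server.
# Assumes the model was pulled with `ollama pull llama3-gradient` and the server
# is running on its default port. The prompt and num_ctx value are illustrative.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3-gradient",
        "prompt": "In one sentence, what does a longer context window let an LLM do?",
        "stream": False,                 # return a single JSON object instead of a stream
        "options": {"num_ctx": 256000},  # extend the context window beyond the default
    },
    timeout=600,
)
print(resp.json()["response"])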

TIMESTAMPS:
[00:00] LLAMA-3 1M+
[00:57] Needle in a Haystack test (see the sketch after these timestamps)
[02:45] How is it trained?
[03:32] Setting Up and Running Llama3 Locally
[05:45] Responsiveness and Censorship
[07:25] Advanced Reasoning and Information Retrieval
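
As a quick illustration of the needle-in-a-haystack test mentioned in the timestamps: a single fact (the "needle") is buried inside a long block of unrelated filler text, and the model is asked to retrieve it. The sketch below is a simplified, hypothetical version of that idea against the same local Ollama endpoint; the needle, filler text, and context size are made up for illustration.

# Rough sketch of a needle-in-a-haystack check against the local model.
# The needle sentence, filler text, and question are invented for illustration.
import requests

NEEDLE = "The secret passphrase is 'blue pineapple'."
FILLER = "The quick brown fox jumps over the lazy dog. " * 2000  # long distractor text

# Bury the needle roughly in the middle of the haystack.
haystack = FILLER[: len(FILLER) // 2] + NEEDLE + " " + FILLER[len(FILLER) // 2 :]
prompt = haystack + "\n\nQuestion: What is the secret passphrase? Answer briefly."

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3-gradient",
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": 64000},  # must be large enough to hold the whole prompt
    },
    timeout=600,
)
print(resp.json()["response"])  # a passing run mentions 'blue pineapple'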

All Interesting Videos:
Everything LangChain: https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr

Everything LLM: https://youtube.com/playlist?list=PLVEEucA9MYhNF5-zeb4Iw2Nl1OKTH-Txw

Everything Midjourney: https://youtube.com/playlist?list=PLVEEucA9MYhMdrdHZtFeEebl20LPkaSmw

AI Image Generation: https://youtube.com/playlist?list=PLVEEucA9MYhPVgYazU5hx6emMXtargd4z

#AI #promptengineering
