Chip industry strains to meet AI-fueled demands — will smaller LLMs help?

Generative artificial intelligence (AI) in the form of natural-language processing technology has taken the world by storm, with organizations large and small rushing to pilot it in a bid to find new efficiencies and automate tasks.

Tech giants Google, Microsoft, and Amazon are all offering cloud-based genAI technologies or baking them into their business apps for users, with global spending on AI by companies expected to reach $301 billion by 2026, according to IDC.

But genAI tools consume a lot of computational resources, primarily for training the large language models (LLMs) that underpin the likes of OpenAI’s ChatGPT and Google’s Bard. As the use of genAI increases, so too does the strain on the hardware used to run those models, which are the information storehouses for natural language processing.
