Anthropic’s Claude 2.1 LLM turbocharges performance, offers beta tool use

Estimated read time: 1 min

Anthropic has upped the ante for how much information a large language model (LLM) can consume at once, announcing on Tuesday that its just-released Claude 2.1 has a context window of 200,000 tokens. That’s roughly the equivalent of 150,000 words, or more than 500 printed pages of information, Anthropic said.
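As a quick sanity check, the token-to-page conversion can be sketched with a common rule of thumb. The ratios below (~0.75 English words per token, ~300 words per printed page) are approximations widely used for English text, not figures from Anthropic's announcement:

```python
# Back-of-the-envelope sizing of a 200,000-token context window.
# WORDS_PER_TOKEN and WORDS_PER_PAGE are rough rules of thumb for
# English prose, not official Anthropic numbers.
CONTEXT_TOKENS = 200_000
WORDS_PER_TOKEN = 0.75   # typical English tokenization ratio (approximate)
WORDS_PER_PAGE = 300     # typical printed page (approximate)

words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)  # ~150,000 words
pages = words // WORDS_PER_PAGE                # ~500 pages
print(f"{words:,} words, about {pages} printed pages")
```

Actual counts vary with the tokenizer and the text (code and non-English text tokenize less efficiently), so these numbers are only a rough guide.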

The latest Claude version is also more accurate than its predecessor, costs less, and includes beta tool use, the company said in its announcement.

