Supercharging GenAI: Why Narrow Intelligence might be the Missing Piece


Abstract:

Before proceeding, answer these two questions:

What is 1 + 1? Write down the answer.

What is 5344 x 5? Write down the answer.

The first question likely elicits an immediate response of “2” from memory, without calculation. This demonstrates retrieval of stored information from your brain neurons.

The second question, however, requires active computation. You might approach it by first calculating 5344 x 10 = 53440, then halving the result. Or you might use a calculator. This process illustrates the interplay between General Intelligence (planning the approach) and specialized mathematical operations (performing the calculation). These specialized operations are part of what I call the “Narrow Intelligence System”. This system is crucial for meaningful expression of intelligence, as it provides the tools for specific cognitive tasks.

In this blog I propose a paradigm shift in Generative AI (GenAI) development by advocating for the deep integration of Narrow Intelligence (NI) Systems – specifically Theories, Tools & Systems – into the core architecture of GenAI models. While current GenAI models exhibit remarkable capabilities, they are often prone to generating inaccurate or nonsensical outputs, a phenomenon known as “hallucination.” I argue that this limitation stems from a reliance on learned patterns rather than a grounded understanding of real-world systems. Just as the human brain has specialized regions for different functions, a standalone GenAI LLM cannot encompass all capabilities. I propose that integrating Narrow Intelligence (NI) systems into the core architecture of GenAI models could enhance their performance, potentially approaching the functionality of a more comprehensive General Intelligence entity. 

 

Definitions:

Let’s start by defining key terms. Subsequently, we’ll extrapolate these concepts and apply them to the realm of Generative AI (GenAI).

General Intelligence (GI) is a flexible, adaptive form of intelligence capable of reasoning, learning, and problem-solving across a wide range of domains. General Intelligence comprises the following components (non-exhaustive and interconnected):

Level 1 – Foundational Cognition: Logical, rational, and scientific thinking

Level 2 – Information Processing: Attention, memory, and pattern recognition

Level 3 – Higher-Order Cognition: Abstraction, creativity, and meta-cognition

Level 4 – Executive Functions: Decision-making, planning, problem-solving, adaptability, and emotional intelligence

Level 5 – Learning and Adaptation: Ability to acquire new knowledge and skills and modify behavior based on experience

Level 6 – Contextual Understanding: Grasp of complex, nuanced situations and the ability to apply knowledge appropriately

Humans are currently the only known entities to have achieved the highest levels of general intelligence.

Now, let’s introduce the term Narrow Intelligence. This concept builds upon the established notion of Narrow AI, encompassing a broader scope. Narrow AI is a subset within this larger framework.

Narrow Intelligence (NI) can be defined as a specialized intelligence focused on specific tasks or domains, often outperforming general intelligence in these areas.

Narrow Intelligence (NI) broadly comprises:

Theories: This encompasses knowledge structures, as well as models, paradigms, and heuristics. Theories are structured ways of understanding and explaining phenomena, which can include various forms of knowledge organization. Examples include physics, chemistry, game theory, economics, and a country’s constitution.

Tools: This encompasses physical tools, digital tools, algorithms, methods, and techniques. Examples include 3D printers, calculators, algorithms, software, and more.

Systems: This includes large-scale systems like enterprises or government bodies, as well as frameworks, processes, and organized bodies of information or data. Examples include Enterprise Resource Planning (ERP) systems, air traffic control systems, entire enterprises, government systems, and so on.

For the sake of clarity, I will exclude the topics of sentience and consciousness.

 

Evolution of Human General Intelligence and Narrow Intelligence Systems

Through evolutionary processes, humans have developed into natural problem solvers. A confluence of factors seems to have created an optimal environment for humans to distinguish themselves from other animals, achieving the highest known levels of general intelligence. Humans then leveraged this General Intelligence to create various forms of Narrow Intelligence, in the form of Theories, Tools, and Systems.

One can argue that this innate ability to use General Intelligence (all six+ levels) in developing theories, tools, and systems can be viewed as humanity’s superpower that enabled us to build our civilization and further our understanding of the universe. By externalizing complex processes into tools and systems, we conserve cognitive resources. Consider the impracticality of mentally performing all calculations required for systems like banking, stock exchanges, or the global economy. Instead, by creating Theories, Tools, and Systems that perform these processes exponentially faster, we free up our mental capacity for higher-level thinking and innovation.

Look around you. Nearly everything you see falls into one of the three categories of Narrow Intelligence: Tools, Systems, or Theories. The laptop, phone, lamp, TV set, chair, walls, the building itself: every one of them is a Tool or a System that was built using a set of Tools and Systems. Textbooks, blogs, and papers are all knowledge bundles covered by the category of Theories. The few exceptions might include food and creative works like art or paintings. Our ability to create and outsource complex tasks and processes to Narrow Intelligence systems contributed to the current pinnacle of technological advancement in our civilization. We stand not only on the shoulders of giants but also on stacks of Tools, Systems, and Theories.

Let me summarize the above discussions into the following:

1. We human beings are problem solvers.

2. We possess the highest known level of General Intelligence.

3. We extensively utilize Narrow Intelligence systems to solve problems.

4. Usage of Narrow Intelligence systems frees up our cognitive capacity to address complex problems.

5. We utilize our General Intelligence and existing Narrow Intelligence systems to construct increasingly complex Narrow Intelligence systems, which allow us to address increasingly complex problems.

Note: The key points are 4 and 5. The converse also holds: if there are shortcomings in a Narrow Intelligence system, we improvise and fix that system, but we do NOT absorb it into General Intelligence. We will revisit this later.

 

Definitions

Having established these points, let me now draw an analogy to Generative AI. But first, the definitions:

Artificial General Intelligence (AGI) is a type of artificial intelligence that matches or surpasses human (General) Intelligence and capabilities across a wide range of cognitive tasks.

Hallucination (in the context of Large Language Models) refers to a phenomenon where the model generates text that appears coherent and plausible but contains factual inaccuracies, inconsistencies, or completely fabricated information.

 

Integrating Narrow Intelligence deep into Gen AI Models 

The field of Artificial Intelligence has witnessed remarkable advancements over the years, and the trajectory of AI development suggests that we are moving toward the eventual creation of Artificial General Intelligence (AGI).

Large Language Model (LLM) based Generative AI models – such as ChatGPT, Claude, Gemini, Llama – can be viewed as early precursors to AGI, perhaps analogous to an “AGI version 0.2”.

Given that current Generative AI models demonstrate characteristics aligned with levels 1-6 of our previously discussed General Intelligence framework, it is reasonable to extend our five points about human General Intelligence to these AI models.

Here they are:

Conjecture A:

1. Generative AI models are problem solvers (in the form of prompt responders).

2. Generative AI models possess some degree of General Intelligence (the highest level being reached at AGI).

3. Generative AI models must extensively utilize Narrow Intelligence systems to solve problems.

4. Usage of Narrow Intelligence systems greatly reduces hallucinations and allows the models to address complex problems.

5. Generative AI models and existing Narrow Intelligence systems should be used to construct increasingly complex Narrow Intelligence systems, which will allow us to address increasingly complex problems.

Yet again, the key points are 4 and 5. The converse will also hold: if there are shortcomings in a Narrow Intelligence system, we improvise and fix that system, but we should NOT absorb it into the Generative AI model.

 

Let’s take a few concrete examples (Responses from GPT4 as of Aug 1st, 2024).

 

Prompt 1: Person A is older than B, person B is older than C. Is person A older than C?
LLM Response 1: Yes, based on the given information, person A is indeed older than person C.

Prompt 2: Which number is higher. 9.11 or 9.9
LLM Response 2: The number 9.11 is higher than 9.9.

Prompt 3: Write the word Lollipop backwards
LLM Response 3: polliopopL

 

LLM response 1, though correct, emerges from the model’s reasoning rather than from using specific tools. However, responses 2 and 3 demonstrate the limitation of this approach: both are incorrect, hallucinated answers.

To improve accuracy and reduce hallucinations, what if the model recognized when to use appropriate mathematical tools instead of relying solely on emergent behaviors from its training? Then the first prompt would be resolved and answered in the following way:

 

Prompt 1: Person A is older than B, person B is older than C. Is person A older than C?
LLM Internal reasoning: Boolean Algebra and Python
LLM Internal mathematical resolution: A > B and B > C. result = A > C
LLM Response: Yes, based on the given information, person A is indeed older than person C

 

This “internal resolution” can be fed into an executable algorithm or mathematical software, which can then perform precise calculations and return definitive results. The same holds for prompts 2 and 3.

Prompt 2: Which number is higher. 9.11 or 9.9
LLM Internal reasoning: Boolean Algebra and Python
LLM Internal algorithmic resolution (python): result = 9.11 > 9.9
LLM Response: The number 9.9 is higher than 9.11.

Prompt 3: Write the word Lollipop backwards
LLM Internal reasoning: Algorithmic programming and Python
LLM Internal algorithmic resolution (python): reversed_word = 'Lollipop'[::-1]
LLM Response: popilloL
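All three internal resolutions above can be executed directly in Python. Here is a minimal sketch; the concrete ages in the first case are illustrative stand-ins, since the prompt gives no numbers:

```python
# Prompt 1: transitivity of "older than", with illustrative ages where A > B > C
age_a, age_b, age_c = 30, 20, 10
result_1 = age_a > age_c                  # True: A is older than C

# Prompt 2: numeric comparison instead of token-level pattern matching
result_2 = 9.11 > 9.9                     # False: 9.9 is the higher number

# Prompt 3: deterministic string reversal via slicing
reversed_word = 'Lollipop'[::-1]          # 'popilloL'

print(result_1, result_2, reversed_word)
```

Each result is definitive: the tool-backed answers to prompts 2 and 3 cannot hallucinate, because they are computed rather than recalled.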

 

Before generating a response, a GenAI model should engage in an internal “reasoning” process. During this stage, the model analyzes the prompt to determine whether tools are necessary for formulating an answer. If required, it identifies the appropriate tools and prepares the tool response. This intermediary step allows the model to structure its output more effectively, combining its inherent knowledge with tool-assisted computations or data retrieval when needed.
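This routing step can be sketched as a simple dispatcher. The classification heuristics below are purely illustrative placeholders for the model’s internal reasoning, not a real routing policy:

```python
# Hypothetical sketch of the internal "reasoning" stage: inspect the prompt,
# decide whether a Narrow Intelligence tool is needed, and pick one.

def needs_tool(prompt):
    """Crude illustrative classifier: which tool (if any) does the prompt need?"""
    if any(ch.isdigit() for ch in prompt):
        return "calculator"      # numeric content -> delegate to exact arithmetic
    if "backwards" in prompt.lower():
        return "string_ops"      # text manipulation -> delegate to an algorithm
    return None                  # no tool needed; answer from learned knowledge

def plan_response(prompt):
    tool = needs_tool(prompt)
    if tool is None:
        return "answer directly from learned knowledge"
    return f"prepare and run the {tool} tool, then phrase its result"

print(plan_response("Which number is higher. 9.11 or 9.9"))
```

A real model would make this decision from its learned representation of the prompt rather than keyword checks, but the control flow is the same: classify first, delegate second.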

The Python algorithms or mathematical equations generated by the LLM (as in prompts 2 and 3) can be stored for future reuse. Persisting these algorithms in a separate system (outside of the GenAI model) allows for efficient recall and application in similar contexts. If a user reports an error in a response, the code can be corrected and updated as part of an iterative refinement process. This approach enables continuous improvement of the LLM’s code generation capabilities and ensures the accuracy of stored algorithms.
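A minimal sketch of such a store follows, assuming a simple mapping from a prompt signature to the generated snippet. The signature names and the `exec`-based runner are illustrative, not a production design:

```python
# Hypothetical sketch: persist LLM-generated snippets outside the model so they
# can be recalled, corrected after user feedback, and reused without re-training.

algorithm_store = {}                       # prompt signature -> stored code

def save_algorithm(signature, code):
    algorithm_store[signature] = code

def run_algorithm(signature):
    namespace = {}
    exec(algorithm_store[signature], namespace)   # run the stored snippet
    return namespace.get("result")

# First version, generated for prompt 2:
save_algorithm("compare-decimals", "result = 9.11 > 9.9")
print(run_algorithm("compare-decimals"))          # False -> 9.9 is higher

# If a user reports an error, only the stored code changes; the model does not:
save_algorithm("compare-decimals", "result = 9.9 > 9.11")
```

The key property is in the last line: correcting a faulty snippet touches only the external store, never the model weights.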

While these examples are relatively simple, this concept can be extended to much more complex problems and prompts. GenAI’s ability to identify patterns imperceptible to humans is already well established. By effectively decomposing intricate issues and leveraging a combination of tools, systems, and theoretical frameworks, GenAI has the potential to tackle sophisticated challenges with remarkable efficiency.

This concept can be extended to accessing up-to-date information beyond the model’s training cutoff. For example, when prompted with a set of symptoms and asked for a diagnosis, the model could first access a specialized database of recent medical literature. It would then perform Retrieval-Augmented Generation (RAG), integrating this current information with its existing knowledge before providing a final diagnosis. This approach combines the model’s reasoning capabilities with the most recent domain-specific data, potentially improving the accuracy and relevance of its output.
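The retrieval step of that flow can be sketched as follows. The toy corpus and keyword-overlap scoring are illustrative; production RAG systems typically use vector embeddings and a real document store:

```python
# Hypothetical sketch of Retrieval-Augmented Generation's retrieval step:
# fetch the most relevant recent documents, then prepend them to the prompt.

documents = [
    "Fever and cough are common in viral infections.",
    "Persistent headache with a stiff neck can indicate meningitis.",
]

def retrieve(query, corpus, k=1):
    """Naive keyword-overlap scoring; real systems use embeddings."""
    query_terms = set(query.lower().split())
    def score(doc):
        return len(query_terms & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def augmented_prompt(query):
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(augmented_prompt("patient presents with fever and cough"))
```

The model then answers the augmented prompt, so its output is grounded in the retrieved, up-to-date material rather than in its training snapshot alone.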

A significant advantage of this approach is that fixing or extending these capabilities does not require compute-intensive model re-training. Instead, only code updates are needed, resulting in a more efficient process. This method reduces the computational resources and time typically associated with re-training, allowing for faster deployment and iteration of AI systems.

This leads me to the next conjecture:

Conjecture B:

Generative AI models should be capable of interpreting prompts and determining how to leverage Narrow Intelligence Systems appropriately. The integration of General Intelligence capabilities with Narrow Intelligence tools results in highly efficient scaling and significantly reduces the occurrence of hallucinations.

Narrow Intelligence systems are typically designed for specific tasks in the physical world, thus grounding their responses in reality. When properly defined and curated, the NI systems do not hallucinate or generate false information. While they may contain errors or bugs that require correction, their responses are generally definitive and accurately represent real-world scenarios.

Integrating Generative AI models with Narrow Intelligence systems is most effective when done during the pre-training phase and incorporated into the core architecture. However, in the interim, similar results may be achievable using external agents, such as those implemented with AutoGen or other agent frameworks. These agents can interpret prompts, determine appropriate tool usage, and then perform the required actions. This approach provides a bridge between generative capabilities and specific task execution until more integrated solutions are developed.
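Such a bridge can be sketched as a registry of callable tools that the agent invokes once the model has decided which one to use. The tool names below are hypothetical, and frameworks like AutoGen handle this registration and dispatch far more robustly:

```python
# Hypothetical sketch of an external agent: a registry of Narrow Intelligence
# tools the agent can invoke on the model's behalf.

TOOLS = {
    "reverse_text": lambda text: text[::-1],
    "max_of": lambda csv: str(max(float(x) for x in csv.split(","))),
}

def agent_act(tool_name, argument):
    """Perform the planned action with the matching tool, if it exists."""
    tool = TOOLS.get(tool_name)
    if tool is None:
        return "no such tool; fall back to the model's own answer"
    return tool(argument)

print(agent_act("reverse_text", "Lollipop"))   # popilloL
print(agent_act("max_of", "9.11,9.9"))         # 9.9
```

Because each tool is an ordinary deterministic function, its outputs are grounded: the agent can hallucinate about which tool to call, but not about what the tool returns.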

 

Limitations

This approach has certain limitations. During pre-training or fine-tuning, GenAI models gain experience by modifying the weights of their neural networks, similar to how humans learn from trial and error. However, when GenAI models use external tools, they don’t gain experience in the same way: no neural network modifications occur, akin to upgrading software without improving the underlying system. While integrating Narrow Intelligence systems may address hallucinations, models still require pre-training to develop their core capabilities and reasoning skills.

 

Conclusion

By effectively integrating Narrow Intelligence into the core architecture of Generative AI models, we unlock a vast potential for improvement. This integration not only addresses the persistent problem of hallucinations but also empowers GenAI to tackle increasingly complex challenges with greater accuracy, efficiency, and real-world relevance. The future of GenAI lies not solely in its ability to learn and reason but also in its capacity to harness the power of existing tools, systems, and knowledge structures. By bridging the gap between General and Narrow Intelligence, we pave the way for a new era of AI that truly understands and interacts with the world around us.

One more thing. This approach, if implemented as proposed, could potentially lead to solving the ARC challenge. (The ARC, or Abstraction and Reasoning Corpus, challenge is a benchmark designed to evaluate artificial intelligence systems on their ability to perform general reasoning tasks.)

 

References

Toolformer: Language Models Can Teach Themselves to Use Tools

Large Language Models as Tool Makers

ReAct: Synergizing Reasoning and Acting in Language Models

TALM: Tool Augmented Language Models

ARC-AGI Challenge

AutoGen

 

 

