Generative AI has broken out of research labs and is now transforming the way business is done. SAP is moving at full speed to embrace this trend and has launched an agent called Joule. In this blog series, I’ll provide a “super-fast hands-on” guide to help you quickly call default models of SAP AI Core and expand them into practical AI agents for real-world business use, so you can understand how these agents work behind the scenes.
Notice
The Japanese version is available here.
What You’ll Learn in This Series
How to spin up a custom AI agent on SAP AI Core in minutes
Hands‑on with LangChain, Google Search Tool, RAG, and Streamlit
Exposing the agent as a REST API and rebuilding the UI in SAPUI5/Fiori
Time Commitment
Each part is designed to be completed in 10–15 minutes
Series Roadmap
Part 0 Prologue
Part 1 Env Setup: SAP AI Core & AI Launchpad
Part 2 Building a Chat Model with LangChain
Part 3 Agent Tools: Integrating Google Search [current blog]
Part 4 RAG Basics ①: HANA Cloud VectorEngine & Embedding
Part 5 RAG Basics ②: Building Retriever Tool
Part 6 Streamlit Basics
Part 7 Streamlit UI Prototype
Part 8 Expose as a REST API
Part 9 Rebuild the UI with SAPUI5 1st Half
Part 10 Rebuild the UI with SAPUI5 2nd Half
If you enjoyed this post, please give it a kudos! Your support really motivates me. Also, if there’s anything you’d like to know more about, feel free to leave a comment!
Agent Tools: Integrating Google Search
1 | Overview
An AI agent is more than just a chat model. Following the ReAct (Reason + Act) paradigm proposed by Yao et al. in 2022, LLMs can now both generate reasoning steps and perform concrete actions, such as calling external tools. For example, by executing an action like a Google search or a database query and passing the result back to the LLM, each subsequent reasoning step is backed by up-to-date information.
This approach helps suppress hallucinations and enables AI agents that generate much more accurate responses.
In LangChain, the ReAct agent pattern is already built in. We simply register tools (like google_search), and LangChain decides at runtime whether the model should think or act.
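To make that loop concrete, here is a tiny, self-contained sketch of the Reason → Act → Observe cycle. It is illustrative only: llm_decide is a dummy stand-in for a real LLM, and LangChain implements all of this for you behind the scenes.
# Illustrative ReAct loop (not LangChain internals; llm_decide is a dummy stand-in for the LLM)
def llm_decide(question: str, observations: list[str]) -> dict:
    # Pretend-LLM: search first, then answer once an observation is available
    if not observations:
        return {"action": "google_search", "input": question}
    return {"final_answer": f"Answer based on: {observations[-1]}"}

def react_loop(question: str, tools: dict, max_steps: int = 3) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        step = llm_decide(question, observations)      # Reason: decide to act or to answer
        if "final_answer" in step:
            return step["final_answer"]                # Enough evidence -> final answer
        result = tools[step["action"]](step["input"])  # Act: call the chosen tool
        observations.append(result)                    # Observe: feed the result back
    return "No answer within the step limit"

print(react_loop(
    "Who won the Tokyo Marathon in 2025?",
    {"google_search": lambda q: "search results for: " + q},
))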
2 | Prerequisites
BTP sub-account
SAP AI Core instance
SAP AI Launchpad subscription
Python 3.13 and pip
VSCode, BAS, or any IDE
3 | Get a Google API Key
LangChain needs your Google API key and Custom Search Engine (CSE) ID so the agent can legally query Google. Think of it as giving your assistant a badge to enter the web.
Official docs: LangChain already provides a ready‑made Google Search Tool page.
Open the Google Cloud Console.
Select or create a project, and click “Create credentials → API key”. A dialog shows your new key.
Tip: rename the key to something memorable.
Go to the Programmable Search Engine create page, and enter any name, set Search the entire web, and click “Create.”
A screen with an HTML snippet appears.
<script async src="https://cse.google.com/cse.js?cx=GOOGLE_CSE_ID"></script>
<div class="gcse-search"></div>
The value after cx= in the snippet is your GOOGLE_CSE_ID. Add it and the API key to your project‑root .env file and keep them out of source code.
GOOGLE_CSE_ID="your GOOGLE_CSE_ID"
GOOGLE_API_KEY="your GOOGLE_API_KEY"
Your notebook can now authenticate to Google when the agent calls google_search!
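If you manage these variables with python-dotenv (an assumption; any way of exporting environment variables works), a minimal sketch to load them inside the notebook looks like this:
# Load GOOGLE_API_KEY and GOOGLE_CSE_ID from the project-root .env (requires python-dotenv)
from dotenv import load_dotenv
import os

load_dotenv()
print("CSE ID loaded:", bool(os.getenv("GOOGLE_CSE_ID")))
print("API key loaded:", bool(os.getenv("GOOGLE_API_KEY")))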
4 | Add the Google Search Tool to LangChain
We wrap Google’s JSON API inside a LangChain Tool so the agent can call google_search the same way it calls the LLM.
Open the notebook from Part 2, and add the following cell.
# Notebook Cell 3
from langchain.tools import Tool
from langchain_community.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper(k=5)  # return top-5 results

google_tool = Tool.from_function(
    name="google_search",
    description="Search Google and return the first results",
    func=search.run,
)
What exactly happens in this step?
GoogleSearchAPIWrapper: a class that returns a search-result string for the query entered by the agent.
Tool.from_function: converts any Python function into a LangChain Tool by attaching a human‑readable name and description.
description: agents refer to each tool's description to determine when and how to use it. Think of it as an "instruction manual" for the model.
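Before handing the tool to an agent, you can sanity-check it by calling it directly; the query string below is just an example.
# Optional sanity check: run the tool on its own, outside any agent
print(google_tool.run("SAP AI Core release highlights"))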
5 | Build a LangChain Agent
We combine the chat model from Part 2 with the Google tool to form an Agent that can decide on‑the‑fly whether it needs external information.
# Notebook Cell 4
from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(
    tools=[google_tool],
    llm=chat_llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)

agent.invoke("Who won the Tokyo Marathon in 2025, and what was the finishing time?")
What exactly happens in this step?
initialize_agent: wires your LLM, tools, and optional memory into a single Agent object.
tools=[google_tool]: hands the agent a toolbox; right now it contains only Google Search, but you can append more later.
agent=AgentType.OPENAI_FUNCTIONS: since we're using OpenAI's LLM in this case, we choose the function-calling specification that is compatible with OpenAI models.
Expect to see a printed google_search call, followed by the champion's name and time!
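If you prefer to inspect the tool calls programmatically instead of reading the verbose log, one option is the return_intermediate_steps flag of the underlying AgentExecutor; a rough sketch:
# Optional: also return the intermediate (tool-call) steps alongside the final answer
agent_with_steps = initialize_agent(
    tools=[google_tool],
    llm=chat_llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=False,
    return_intermediate_steps=True,
)

result = agent_with_steps.invoke("Who won the Tokyo Marathon in 2025?")
for action, observation in result["intermediate_steps"]:
    print(action.tool, "->", observation[:100])  # which tool ran and a snippet of its output
print(result["output"])                          # the agent's final answer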
6 | Challenge – Add Your Own Tool!
Let's create an agent that draws omikuji (a Japanese fortune), following LangChain's "How to create tools" guide.
# Notebook Cell 5
import random

def omikuji(_: str) -> str:
    return random.choice([
        "XXXXXX"
    ])

omikuji_tool = Tool.from_function(
    name="xxxxxx",
    description="……",
    # ?????
)

omikuji_agent = initialize_agent(
    tools=XXXXXXXX,
    llm=chat_llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)

omikuji_agent.invoke("pick a fortune for me")
If omikuji_agent.invoke("pick a fortune for me") returns one of the predefined fortunes, it's a pass!
Have fun customizing!
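If you get stuck, here is the same pattern applied to a different toy tool (a dice roller), so the omikuji solution stays yours to work out; the name and description below are just examples.
# Reference only: a custom tool built with the same pattern (does not spoil the omikuji answer)
import random
from langchain.tools import Tool

def roll_dice(_: str) -> str:
    return f"You rolled a {random.randint(1, 6)}"

dice_tool = Tool.from_function(
    func=roll_dice,
    name="roll_dice",
    description="Roll a six-sided die and return the result. Use when the user asks for a dice roll.",
)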
7 | Next Up
Part 4 RAG Basics ①: HANA Cloud VectorEngine & Embedding
In Part 4 we upload PDFs, vectorise them in HANA Cloud Vector Engine, and let the agent answer with your own documents — the first step toward full‑blown RAG.
Disclaimer
All views and opinions in this blog are my own, expressed in my personal capacity, and SAP shall not be responsible or liable for any of the content published in this blog.
#SAP
#SAPTechnologyblog