Generative AI has broken out of research labs and is now transforming the way business is done. SAP is moving at full speed to embrace this trend and has launched an agent called Joule. In this blog series, I'll provide a "super-fast hands-on" guide to help you quickly call default models of SAP AI Core and expand them into practical AI agents for real-world business use, so you can understand how these agents work behind the scenes.
Notice
The Japanese version is available here.
What You'll Learn in This Series
- How to spin up a custom AI agent on SAP AI Core in minutes
- Hands-on with LangChain, Google Search Tool, RAG, and Streamlit
- Exposing the agent as a REST API and rebuilding the UI in SAPUI5/Fiori
Time Commitment
Each part is designed to be completed in 10–15 minutes.
🗺️ Series Roadmap
- Part 0 Prologue
- Part 1 Env Setup: SAP AI Core & AI Launchpad
- Part 2 Building a Chat Model with LangChain [current blog]
- Part 3 Agent Tools: Integrating Google Search
- Part 4 RAG Basics ①: HANA Cloud Vector Engine & Embedding
- Part 5 RAG Basics ②: Building Retriever Tool
- Part 6 Streamlit Basics
- Part 7 Streamlit UI Prototype
- Part 8 Expose as a REST API
- Part 9 Rebuild the UI with SAPUI5 1st Half
- Part 10 Rebuild the UI with SAPUI5 2nd Half
If you enjoyed this post, please give it kudos! Your support really motivates me. Also, if there's anything you'd like to know more about, feel free to leave a comment!
Building a Chat Model with LangChain
1 | Overview
We'll move into a Jupyter Notebook, install the required SDKs, and send our first chat to the GPT-4o-mini deployment we created in Part 1!
2 | Prerequisites
- BTP sub-account
- SAP AI Core instance
- SAP AI Launchpad subscription
- Python 3.13 and pip
- VS Code, BAS, or any IDE
3 | Prepare Local Notebook Env
Open VS Code and create a new Jupyter Notebook (langchain_chat.ipynb). Create a fresh folder and (optionally) a Python virtual environment so dependencies don't clash with other projects.
A requirements.txt file is simply a checklist of Python packages (and versions) your notebook needs.
ai_core_sdk>=2.5.7
pydantic==2.9.2
openai>=1.56.0
google-cloud-aiplatform==1.61.0 # Google
boto3==1.35.76 # Amazon
langchain~=0.3.0
langgraph==0.3.30
langchain-community~=0.3.0
langchain-openai>=0.2.14
langchain-google-vertexai==2.0.1
langchain-google-community==2.0.7
langchain-aws==0.2.9
python-dotenv==1.1.0
generative-ai-hub-sdk
Open a terminal inside your project folder after activating your virtual environment (e.g. .venv). Then execute the command:
pip install -r requirements.txt
Once the install finishes, restart your Notebook kernel so it picks up the new libraries.
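If you want to double-check that the restarted kernel actually sees the new packages, a quick version check in a fresh cell can help. This is purely an optional sanity check; the names passed in are the distribution names from requirements.txt above.
# Optional sanity check: confirm the kernel sees the freshly installed packages
from importlib.metadata import version

for pkg in ["langchain", "langchain-openai", "generative-ai-hub-sdk"]:
    print(pkg, version(pkg))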
4 | Say Hello to GPT-4o-mini
In a new notebook cell, load your .env and send a prompt.
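If you have not prepared the .env file yet, it typically holds the AI Core service-key values plus the deployment ID from Part 1. The variable names below assume the Generative AI Hub SDK's default AI Core credential lookup; treat this as a sketch and copy the real values from your own service key.
# .env (placeholder values - take the real ones from your AI Core service key)
AICORE_AUTH_URL=<auth url from the service key>
AICORE_CLIENT_ID=<client id>
AICORE_CLIENT_SECRET=<client secret>
AICORE_BASE_URL=<AI API url from the service key>
AICORE_RESOURCE_GROUP=default
DEPLOYMENT_ID=<the GPT-4o-mini deployment ID from Part 1>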
Read the docs!
Before you type a single line of code, skim the Generative AI Hub SDK documentation. Notice how the SDK wraps OpenAI, Vertex AI, and Bedrock and exposes a LangChain-compatible interface. Keep the version matrix handy: mismatched pins are the #1 support ticket.
Add the following cell.
# ❶ Notebook Cell 1
from dotenv import load_dotenv
from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
import os

load_dotenv()  # Read credentials & DEPLOYMENT_ID from .env

chat_llm = ChatOpenAI(
    deployment_id=os.getenv("DEPLOYMENT_ID")  # <- the ID you noted down in Part 1
)

messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "Hello, World!"),
]

chat_llm.invoke(messages)
The "system" message defines the model's role and behaviour. The "human" message is the user's input; replace it with whatever text comes from your chat UI.
Success = "Bonjour, le monde !"
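Note that invoke returns a LangChain message object rather than a plain string. If you only need the translated text, for example to display it in a chat UI later in this series, read its content attribute:
# Optional: keep the response object and print only the text content
response = chat_llm.invoke(messages)
print(response.content)  # e.g. "Bonjour, le monde !"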
5 | Challenge: Embed Text like a Pro!
You'll now deploy OpenAI's text-embedding-ada-002 (or any embedding model) in AI Launchpad exactly the same way you deployed GPT-4o-mini. Then call it from LangChain and make sure you receive a vector.
The steps are as follows:
1. Deploy the embedding model in AI Launchpad.
2. Copy the new Deployment ID.
Notebook Cell (some fields masked)
# ❷ Notebook Cell 2
from gen_ai_hub.proxy.langchain.openai import AAAAAAAAAAAA

embedding_model = AAAAAAAAAAAA(
    deployment_id="debXXXXXXXXX"
)

single_vector = embedding_model.BBBBBBBBBBBB("Hello world")
print(str(single_vector)[:100])  # Print first 100 chars
You're good if you see "[-0.012, 0.087, ...]", a string of numbers!
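Beyond eyeballing the numbers, you can also check the vector's length: it should match the model's dimensionality, which is 1,536 for text-embedding-ada-002. Assuming the masked call above returns a plain Python list of floats:
# Optional: verify the embedding has the expected number of dimensions
print(len(single_vector))  # text-embedding-ada-002 returns 1536-dimensional vectors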
6 | Next Up
Part 3 Agent Tools: Integrating Google Search
Part 3 upgrades our chat model into a LangChain Agent and bolts on the Google Search tool so answers stay fresh. No extra API keys; everything still runs inside SAP AI Core.
Disclaimer
All views and opinions in this blog are my own, expressed in my personal capacity; SAP shall not be responsible or liable for any of the content published in this blog.