Generative AI has broken out of research labs and is now transforming the way business is done. SAP is moving at full speed to embrace this trend and has launched an agent called Joule. In this blog series, I'll provide a "super-fast hands-on" guide to help you quickly call default models of SAP AI Core and expand them into practical AI agents for real-world business use, so you can understand how these agents work behind the scenes.
Notice
The Japanese version is available here.
What You'll Learn in This Series
How to spin up a custom AI agent on SAP AI Core in minutes
Hands-on with LangChain, Google Search Tool, RAG, and Streamlit
Exposing the agent as a REST API and rebuilding the UI in SAPUI5/Fiori
Time Commitment
Each part is designed to be completed in 10–15 minutes
Series Roadmap
Part 0 Prologue
Part 1 Env Setup: SAP AI Core & AI Launchpad
Part 2 Building a Chat Model with LangChain [current blog]
Part 3 Agent Tools: Integrating Google Search
Part 4 RAG Basics ①: HANA Cloud Vector Engine & Embedding
Part 5 RAG Basics ②: Building Retriever Tool
Part 6 Streamlit Basics
Part 7 Streamlit UI Prototype
Part 8 Expose as a REST API
Part 9 Rebuild the UI with SAPUI5 (1st Half)
Part 10 Rebuild the UI with SAPUI5 (2nd Half)
If you enjoyed this post, please give it a kudos! Your support really motivates me. Also, if there's anything you'd like to know more about, feel free to leave a comment!
Building a Chat Model with LangChain
1 | Overview
We'll move into a Jupyter Notebook, install the required SDKs, and send our first chat to the GPT-4o-mini deployment we created in Part 1!
2 | Prerequisites
BTP sub-account
SAP AI Core instance
SAP AI Launchpad subscription
Python 3.13 and pip
VS Code, BAS, or any IDE
3 | Prepare Local Notebook Env
Open VS Code and create a new Jupyter Notebook (langchain_chat.ipynb). Create a fresh folder and (optionally) a Python virtual environment so dependencies don't clash with other projects.
A requirements.txt file is simply a checklist of Python packages (and versions) your notebook needs.
ai_core_sdk>=2.5.7
pydantic==2.9.2
openai>=1.56.0
google-cloud-aiplatform==1.61.0 # Google
boto3==1.35.76 # Amazon
langchain~=0.3.0
langgraph==0.3.30
langchain-community~=0.3.0
langchain-openai>=0.2.14
langchain-google-vertexai==2.0.1
langchain-google-community==2.0.7
langchain-aws==0.2.9
python-dotenv==1.1.0
generative-ai-hub-sdk
Open a terminal inside your project folder after activating your virtual environment (e.g. .venv). Then execute the command:
pip install -r requirements.txt
Once the install finishes, restart your Notebook kernel so it picks up the new libraries.
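Optionally, confirm that the kernel now sees the packages you just pinned. Here is a quick sanity-check cell; it uses only the standard library, and the package names come from requirements.txt:
# Optional: verify the installed package versions
from importlib.metadata import version

for pkg in ["generative-ai-hub-sdk", "langchain", "langchain-openai"]:
    print(pkg, version(pkg))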
4 | Say Hello to GPT-4o-mini
In a new notebook cell, load your .env and send a prompt.
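Before running the cell, make sure the .env file in your project folder contains the SAP AI Core service-key values and the deployment ID. A minimal sketch is shown below; the variable names assume the Generative AI Hub SDK's default configuration keys, and every value is a placeholder you replace with your own:
# .env (placeholders only – copy the values from your AI Core service key)
AICORE_AUTH_URL=<url from the service key>
AICORE_CLIENT_ID=<clientid from the service key>
AICORE_CLIENT_SECRET=<clientsecret from the service key>
AICORE_BASE_URL=<AI API URL from the service key>
AICORE_RESOURCE_GROUP=default
DEPLOYMENT_ID=<the GPT-4o-mini deployment ID from Part 1>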
Read the docs!
Before you type a single line of code, skim the Generative AI Hub SDK documentation. Notice how the SDK wraps OpenAI, Vertex AI, and Bedrock, and exposes a LangChain-compatible interface. Keep the version matrix handy; mismatched pins are the #1 support ticket.
# Notebook Cell 1
from dotenv import load_dotenv
from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
import os

load_dotenv()  # Read credentials & DEPLOYMENT_ID

chat_llm = ChatOpenAI(
    deployment_id=os.getenv("DEPLOYMENT_ID")  # ← the ID you noted in Part 1
)

messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "Hello, World!"),
]

chat_llm.invoke(messages)
The "system" message defines the model's role and behaviour. The "human" message is the user's input; replace it with whatever text comes from your chat UI.
Success = "Bonjour, le monde !"
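invoke returns a LangChain message object. If you only want the reply text, for example to display it in a chat UI later in this series, a minimal sketch (assuming the standard LangChain AIMessage interface):
# Optional: keep only the text of the reply
response = chat_llm.invoke(messages)
print(response.content)  # e.g. "Bonjour, le monde !"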
5 | Challenge: Embed Text like a Pro!
You'll now deploy OpenAI's text-embedding-ada-002 (or any embedding model) in AI Launchpad exactly the same way you deployed GPT-4o-mini. Then call it from LangChain and make sure you receive a vector.
The steps are as follows:
Deploy the embedding model in AI Launchpad.
Copy the new Deployment ID.
Notebook Cell (some fields masked)
# Notebook Cell 2
from gen_ai_hub.proxy.langchain.openai import AAAAAAAAAAAA

embedding_model = AAAAAAAAAAAA(
    deployment_id="debXXXXXXXXX"
)

single_vector = embedding_model.BBBBBBBBBBBB("Hello world")
print(str(single_vector)[:100])  # Print first 100 chars
You're good if you see "[-0.012, 0.087, …]", a string of numbers!
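As an additional sanity check, you can also look at the vector length; text-embedding-ada-002 produces 1536-dimensional vectors, so a mismatch usually means you are calling the wrong deployment:
# Optional: check the embedding dimensionality
print(len(single_vector))  # 1536 for text-embedding-ada-002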
6 | Next Up
Part 3 Agent Tools: Integrating Google Search
Part 3 upgrades our chat model into a LangChain Agent and bolts on the Google Search tool so answers stay fresh. No extra API keys; everything still runs inside SAP AI Core.
Disclaimer
All views and opinions in this blog are my own, expressed in my personal capacity; SAP shall not be responsible or liable for any of the content published in this blog.