How to Integrate SAP AI Core with LangChain Using Generative AI Hub SDK


Introduction

In this blog, you will learn how to use the LangChain wrapper of the Generative AI Hub SDK to call LLMs from providers such as OpenAI or Google that are provisioned on SAP AI Core, all from Python.

Note: This guide assumes that your AI Core setup has already been completed.

1. Environment Setup

First, create an AI Core service key from your BTP subaccount and download it.

The downloaded file (e.g., ai-core.txt) contains the credentials you will need below.

Next, store the obtained service key information in a .env file.

In a real development environment, make sure to use .gitignore to avoid committing secrets.

# Configure from the downloaded .txt file
AICORE_BASE_URL={serviceurls.AI_API_URL + '/v2'}
AICORE_CLIENT_ID={clientid}
AICORE_CLIENT_SECRET={clientsecret}
AICORE_AUTH_URL={url}

# Set AI Core resource group
AICORE_RESOURCE_GROUP=default

Note: Make sure to append /v2 at the end of AICORE_BASE_URL.

e.g., https://api.ai.***.cfapps.sap.hana.ondemand.com/v2
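Before moving on, it can help to verify that the .env file is complete. The following is a minimal sketch (the check_aicore_env helper is not part of the SDK, just an illustration) that confirms every required variable is set and that the base URL ends with /v2:

```python
import os

# The variables expected in the .env file above
REQUIRED_VARS = [
    "AICORE_BASE_URL",
    "AICORE_CLIENT_ID",
    "AICORE_CLIENT_SECRET",
    "AICORE_AUTH_URL",
    "AICORE_RESOURCE_GROUP",
]

def check_aicore_env() -> list:
    """Return a list of problems found in the AI Core environment variables."""
    problems = [name for name in REQUIRED_VARS if not os.getenv(name)]
    base_url = os.getenv("AICORE_BASE_URL", "")
    if base_url and not base_url.endswith("/v2"):
        problems.append("AICORE_BASE_URL must end with /v2")
    return problems
```

Call check_aicore_env() after load_dotenv(); an empty list means the configuration looks complete.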

2. Install Required Packages

Add the following to your requirements.txt:

generative-ai-hub-sdk[all]
python-dotenv

The generative-ai-hub-sdk[all] package supports LangChain and multiple LLM providers (OpenAI, Amazon, Google, etc.).

Then, install the packages using:

pip install -r requirements.txt

3. Implementation

Once your environment variables and packages are ready, create a main.py file:

from dotenv import load_dotenv
from gen_ai_hub.proxy.langchain.init_models import init_llm
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

def main():
    # Load the AI Core credentials from the .env file
    load_dotenv()

    try:
        prompt = ChatPromptTemplate.from_messages([
            ("system", "You are a helpful assistant"),
            ("user", "{input}")
        ])
        # init_llm returns a LangChain-compatible chat model backed by AI Core
        llm = init_llm("gpt-4o", max_tokens=300)
        chain = prompt | llm | StrOutputParser()
        response = chain.invoke({"input": "Hello, how are you?"})
        print(f"Response: {response}")

    except Exception as e:
        print(f"Error occurred: {e}")
        return

if __name__ == "__main__":
    main()

Here, the Generative AI Hub SDK is used only for the init_llm call; the rest of the code that builds and invokes the chain is standard LangChain.
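To see why that matters, consider what the pipe operator in prompt | llm | StrOutputParser() is doing: each stage's output feeds the next stage's input. The toy sketch below (not LangChain's actual implementation, just an illustration of the composition idea, with a fake in-memory "LLM") mimics that behavior:

```python
class Step:
    """Minimal stand-in for a LangChain Runnable (illustration only)."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # `a | b` composes the two steps: a's output becomes b's input
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# A chain shaped like: prompt | llm | StrOutputParser()
prompt = Step(lambda vars: f"System: You are a helpful assistant\nUser: {vars['input']}")
fake_llm = Step(lambda text: {"content": f"(model reply to: {text.splitlines()[-1]})"})
parser = Step(lambda msg: msg["content"])

chain = prompt | fake_llm | parser
print(chain.invoke({"input": "Hello, how are you?"}))
# → (model reply to: User: Hello, how are you?)
```

Because init_llm returns an object that plugs into this same pattern, you can swap in any model deployed in your AI Core resource group without changing the surrounding chain.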

4. Run the Script

To run the script, execute:

python main.py

You should see a response like:
“Hello! I’m here and ready to assist you. How can I help you today?”

Summary

By following these steps, you can quickly call LLMs using the Generative AI Hub SDK and LangChain.

This guide should help you get started with your implementation!

References

SAP Cloud SDK for AI (Python) 

