Introduction
As part of the Intelligent Product Recommendation team, we are collaborating with Microsoft to use AutoGen for agentic use cases. I initially came across a helpful post, AutoGen with SAP AI Core by @lars_gregori, which explores the integration of AutoGen using pyautogen and llm_config. However, llm_config is deprecated as of AutoGen v0.6.1. This blog will guide you through integrating AutoGen with SAP AI Core with the help of AzureOpenAIChatCompletionClient.
Background
Currently, the IPR product already uses the Python generative-ai-hub-sdk to call the LLM, which takes the 'AI-Resource-Group' header and the model name as required inputs. We then came across a critical business use case that requires AutoGen to implement the reflection pattern, which led me to explore the OpenAIChatCompletionClient class of the AutoGen library.
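For context, the reflection pattern alternates between a generating step and a critiquing step until the critic is satisfied. Below is a minimal, library-free sketch of that control flow; the generate() and critique() functions are hypothetical stand-ins for LLM calls, not part of AutoGen or the SDK:

```python
from typing import Optional

def generate(task: str, feedback: Optional[str]) -> str:
    # Hypothetical stand-in for an LLM call; a real agent would
    # regenerate its answer using the critic's feedback.
    return task.upper() if feedback else task

def critique(draft: str) -> Optional[str]:
    # Hypothetical critic: return None when the draft is acceptable,
    # otherwise return feedback for the next round.
    return None if draft.isupper() else "Please use upper case."

def reflect(task: str, max_rounds: int = 3) -> str:
    # Reflection loop: generate, critique, regenerate until approved.
    feedback: Optional[str] = None
    draft = generate(task, feedback)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback is None:
            break  # critic approved the draft
        draft = generate(task, feedback)
    return draft
```

In the real implementation both roles are AutoGen agents backed by the model client described below.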
I tried using OpenAIChatCompletionClient by passing base_url and a token, which returned 404 Resource Not Found, since SAP AI Core forwards these requests to the Azure OpenAI platform, which requires api_version as a mandatory query parameter. The OpenAIChatCompletionClient class, however, does not pass api_version to the LLM when calling it. After a further deep dive I found AzureOpenAIChatCompletionClient, which allows requests to be routed successfully via SAP AI Core to Azure OpenAI by passing the api_version.
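To illustrate why the 404 occurs, an Azure OpenAI chat-completions call must carry api-version as a query parameter on the request URL. The deployment URL below is a made-up placeholder, not a real SAP AI Core endpoint:

```python
from urllib.parse import urlencode

# Hypothetical SAP AI Core deployment URL (placeholder values)
base_url = "https://api.ai.example.ondemand.com/v2/inference/deployments/d123"
api_version = "2023-05-15"

# Azure OpenAI rejects chat-completion requests that lack the
# api-version query parameter, which is why OpenAIChatCompletionClient
# (which never sends it) gets a 404 back.
url = f"{base_url}/chat/completions?{urlencode({'api-version': api_version})}"
print(url)
```

AzureOpenAIChatCompletionClient appends this parameter for us, which is the whole reason to prefer it here.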
Reason to use AzureOpenAIChatCompletionClient
OpenAIChatCompletionClient cannot be used directly.
Instead, we need:
base_url: The model deployment URL from SAP AI Core.
api_key: A bearer token from GenAI Hub (via SAP AI Core).
api_version: Required by Azure for all chat completions.
Additional headers like AI-Resource-Group.
Code Sample
The code sample below shows how to get the deployment URL and auth token using generative-ai-hub-sdk, which can then be passed to AzureOpenAIChatCompletionClient along with the api_version.
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.models import ModelFamily
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient
from gen_ai_hub.proxy import GenAIHubProxyClient
from ai_core_sdk.ai_core_v2_client import AICoreV2Client
import asyncio

# Fetch deployment URL from SAP AI Core
def get_deployment_url():
    try:
        ai_core_client = AICoreV2Client.from_env()
        resources = ai_core_client.deployment.query(
            resource_group="default", scenario_id="foundation-models"
        ).resources
        return resources[0].deployment_url if resources else None
    except Exception:
        return None

# Fetch token and prepare model client
base_url = get_deployment_url()
gen_ai_hub_proxy_client = GenAIHubProxyClient(resource_group="default")
token = gen_ai_hub_proxy_client.get_ai_core_token().replace("Bearer ", "")

model_client = AzureOpenAIChatCompletionClient(
    model="gpt-4.1",
    base_url=base_url,
    api_key=token,
    default_headers={"AI-Resource-Group": "default"},
    model_info={
        "family": ModelFamily.GPT_41,
        "vision": False,
        "function_calling": True,
        "json_output": False,
    },
    api_version="2023-05-15",  # api_version as per SAP AI Core documentation
)

# Tool example
async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."

# Define the agent
agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather],
    system_message="You are a weather assistant. Give complete details about the weather in the city requested by the user.",
)

# Run the agent
async def main() -> None:
    await Console(agent.run_stream(task="What is the weather in New York?"))
    await model_client.close()

asyncio.run(main())
Conclusion
The example above is adapted from the AutoGen documentation. The api_version value to pass to the chat completion client can be found in the SAP AI Core documentation. Output from the above code:
---------- TextMessage (user) ----------
What is the weather in New York?
---------- ToolCallRequestEvent (weather_agent) ----------
[FunctionCall(id='call_WJfl63g7skHT3dWnxMCwNtCr', arguments='{"city":"New York"}', name='get_weather')]
---------- ToolCallExecutionEvent (weather_agent) ----------
[FunctionExecutionResult(content='The weather in New York is 73 degrees and Sunny.', name='get_weather', call_id='call_WJfl63g7skHT3dWnxMCwNtCr', is_error=False)]
---------- ToolCallSummaryMessage (weather_agent) ----------
The weather in New York is 73 degrees and Sunny.
I hope this post helps other teams building solutions on SAP AI Core to integrate newer versions of AutoGen, call the LLM, and build multiple AI agents.
Elevate with AI !!