In our previous blog “Building an Agentic AI System with SAP Generative AI Hub”, we explored how to build an agentic AI system using SAP Generative AI Hub and LangChain. That solution demonstrated how LLMs can dynamically decide which tools to invoke in order to respond intelligently to complex, multi-part questions. While the core logic of tool-based orchestration worked well, the implementation was still tightly coupled to our Python application logic.
In this blog, we take the next step toward standardization and enterprise-grade extensibility by introducing the Model Context Protocol (MCP). MCP defines a communication protocol for tool-augmented AI systems, separating host applications (which orchestrate tool usage) from tool servers (which expose specific capabilities). This allows for a more modular and scalable architecture.
Here, we’ll show how we’ve implemented a custom MCP server using Python and deployed it on SAP BTP Kyma Runtime. By doing so, we leverage Kubernetes-native scalability and maintain a clear separation between our tool infrastructure and the LLM client logic. The result: we maintain the same dynamic capabilities from our first blog, but now, they’re exposed as MCP-compliant services running in the cloud.
Let’s explore what this looks like, how it works in action, and how it sets the stage for even more advanced AI orchestration scenarios.
Architecture Overview of the MCP-Based Agentic AI System
To bring modularity, flexibility, and enterprise-grade scale to our agentic AI system, we’ve adopted a Model Context Protocol (MCP) architecture. The diagram below captures the key components and how they interact within our deployment landscape on SAP BTP Kyma Runtime.
At the heart of this solution is the MCP Client, which runs as a Python-based AI application. This component is responsible for:
Sending user prompts,
Interacting with the LLM (e.g., GPT-4o via SAP GenAI Hub),
And deciding which tools to invoke based on tool metadata.
But instead of implementing tools directly in the client/host, the tools are offloaded to a dedicated MCP Server, deployed as a container in Kyma.
Developing the MCP Server to Expose AI Tools
In this section, we explore the server-side implementation of our Agentic AI system using the Model Context Protocol (MCP). The server is implemented in Python using the FastMCP utility, a lightweight and spec-compliant server framework that enables tool registration and execution over Server-Sent Events (SSE). This setup allows any LLM orchestration layer to dynamically discover and invoke tools exposed by this server.
The MCP server in this blog is deployed on SAP BTP Kyma and exposes three callable tools: get_weather, get_time_now, and retriever (a Vector Engine powered by SAP HANA Cloud). These tools extend the capabilities of the connected LLM client by providing access to real-time and domain-specific information.
Server Initialization
The server is instantiated using:
mcp = FastMCP(name="SAP", host="0.0.0.0", port=8050)
Binding to 0.0.0.0 is what makes the server reachable from outside its container once deployed.
This blog focuses on non-productive usage of the MCP architecture; therefore, it is strongly recommended to review and address security considerations before applying this approach in any production environment.
Intelligent Tooling via Declarative Registration
Each tool is declared using the mcp.tool decorator, which handles registration, schema extraction, and tool metadata exposure; a minimal registration sketch follows the tool descriptions below:
Tool: retriever
Connects to SAP HANA Cloud’s vector engine to perform vector search over enterprise documents. Powered by LangChain and OpenAI-compatible embeddings via SAP AI Core and GenAI Hub, this tool retrieves relevant content and uses an LLM to generate answers.
Tool: get_time_now
Provides the current local server time, useful in temporal reasoning or grounding responses.
Tool: get_weather
Calls a public API to retrieve current weather data for a specified latitude and longitude.
All tools follow the MCP specification for discovery (listTools) and execution (callTool), enabling zero-friction orchestration from the client side.
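To make the pattern concrete, here is a minimal registration sketch for a tool like get_weather. The Open-Meteo endpoint used here is an assumption for illustration; the blog does not specify which public weather API backs the actual implementation.

from mcp.server.fastmcp import FastMCP
import httpx

mcp = FastMCP(name="SAP", host="0.0.0.0", port=8050)

@mcp.tool()
async def get_weather(latitude: float, longitude: float) -> str:
    """Return the current weather for the given coordinates."""
    # Open-Meteo is assumed here purely for illustration
    url = "https://api.open-meteo.com/v1/forecast"
    params = {"latitude": latitude, "longitude": longitude, "current_weather": "true"}
    async with httpx.AsyncClient() as client:
        response = await client.get(url, params=params)
        response.raise_for_status()
        return str(response.json()["current_weather"])

The decorator registers the function under its own name, derives the input schema from the type hints, and uses the docstring as the tool description — exactly the metadata returned to clients during discovery.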
Starting the MCP Loop
At the end of the script, the server is started via:
if __name__ == "__main__":
    mcp.run(transport="sse")
This activates the MCP runtime, opening an SSE-based stream endpoint for orchestration. As per the 2024-11-05 MCP specification, this includes:
Persistent connection over /sse for streaming events.
Session management (using session_id).
Bidirectional tool call handling via JSON-RPC (an example request is shown below).
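For illustration, a single tool invocation travels over this channel as a JSON-RPC request roughly like the one below; note that the current MCP specification names these methods tools/list and tools/call, and the id and arguments here are example values:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_time_now",
    "arguments": {}
  }
}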
The server then becomes fully compatible with any compliant MCP client, such as a LangChain-based AgentExecutor or a custom orchestrator built using SAP Generative AI Hub’s SDK.
This MCP server represents a modular and extensible approach to exposing tool-based intelligence within enterprise AI systems, and, with the security hardening noted earlier, it can grow into a production-grade service. By leveraging SAP BTP Kyma for scalability and the MCP standard for interoperability, the architecture decouples tool logic from LLM reasoning.
Deploying the MCP Server to SAP BTP Kyma Runtime
After verifying your MCP server works as expected in a local environment, the next logical step is to deploy it to a scalable, secure, and cloud-native runtime. SAP BTP’s Kyma runtime provides the perfect landing zone for containerized AI microservices, especially when you need full Kubernetes flexibility combined with native SAP integration.
Why Kyma on SAP BTP?
SAP BTP Kyma runtime is a managed Kubernetes offering built on the open-source Kyma framework. It lets you run containers, expose APIs with secure gateways, and interact with SAP services and extensions through eventing and service bindings. In our use case, Kyma is a good fit for hosting the MCP Server so it can dynamically serve tools like weather APIs, real-time clocks, or vector search over SAP HANA Cloud. However, MCP servers can be deployed on any infrastructure without compromising the protocol’s modular architecture or capabilities.
Containerizing the MCP Server with Docker
To containerize the MCP server, we define a lightweight Dockerfile based on python:3.11-slim and use uv for efficient dependency management. The file below installs the dependencies, copies the server script, and runs the MCP Server on port 8050.
FROM python:3.11-slim
WORKDIR /app
# Install uv for faster package management
RUN pip install uv
# Copy requirements file
COPY requirements.txt .
# Install dependencies using uv
RUN uv venv && uv pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY server.py .
# Expose the port the server runs on
EXPOSE 8050
# Command to run the server
CMD ["uv", "run", "server.py"]
This Dockerfile ensures a fast, clean image build optimized for Kyma deployment.
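Before pushing the image anywhere, a quick local smoke test is worthwhile; the commands below assume a local .env file containing the same variables defined in secrets.yaml later in this post:

docker build -t mcp-server:local .
docker run --rm -p 8050:8050 --env-file .env mcp-server:local
# From a second shell, confirm the SSE endpoint answers (-N disables buffering):
curl -N http://localhost:8050/sse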
Pushing to Docker Hub
Once the image is built, you’ll want to push it to Docker Hub (or any container registry accessible to your Kyma cluster). If you’re using a Mac or local ARM-based system but deploying to Azure (x86_64), you’ll need to explicitly build the image for the correct architecture.
docker buildx build --platform linux/amd64 -t carlosbasto/mcp-server:latest --push .
The use of --platform linux/amd64 ensures compatibility with Kyma’s underlying nodes (which typically run on Intel-based VMs in Azure). This avoids runtime errors due to architecture mismatches when the image is pulled and executed in the cluster.
Once pushed, your MCP server container is ready for Kyma.
Creating secrets.yaml – Secure Environment Configuration for the MCP Server
Before deploying your MCP Server to SAP BTP Kyma, it’s critical to configure all runtime credentials and sensitive values securely. Hardcoding such details directly into your container image or deployment files would pose both security and maintainability risks.
Kubernetes provides a native mechanism for managing these configurations through Secrets, enabling secure injection of credentials and tokens into your pods at runtime.
Below is the secrets.yaml file used to configure the MCP Server environment in a secure and decoupled way:
apiVersion: v1
kind: Secret
metadata:
  name: mcp-env-secrets
type: Opaque
stringData:
  TRANSPORT: "<insert your string here>"
  HANA_HOST: "<insert your string here>"
  HANA_USER: "<insert your string here>"
  HANA_PASSWORD: "<insert your string here>"
  AICORE_CLIENT_ID: "<insert your string here>"
  AICORE_CLIENT_SECRET: "<insert your string here>"
  AICORE_AUTH_URL: "<insert your string here>"
  AICORE_BASE_URL: "<insert your string here>"
  AICORE_RESOURCE_GROUP: "<insert your string here>"
type: Opaque
This is the standard Kubernetes secret type for storing arbitrary key-value pairs that don’t follow a specific format (e.g., TLS or Docker config).
stringData vs data
stringData allows you to input plain-text values, which Kubernetes automatically base64-encodes into data when the Secret is stored. Keep in mind that base64 is an encoding, not encryption, so access to Secrets should still be restricted (for example via RBAC and encryption at rest).
Sensitive Variables Explained
HANA_HOST, HANA_USER, HANA_PASSWORD: Credentials used by the retriever tool to connect to the SAP HANA vector store.
AICORE_*: Credentials and endpoints for authenticating with SAP AI Core or the SAP Generative AI Hub, required for embedding generation and LLM inference (e.g. Orchestration).
TRANSPORT: (Optional) Can be used to control runtime behavior, such as selecting between “sse” or “http” transport modes when running the MCP server.
Link this secret to your deployment.yaml using the envFrom.secretRef field. This ensures the MCP Server automatically receives all necessary environment variables without exposing them directly in the container image or repository.
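After you apply this file to the cluster (the kubectl steps appear later in this post), a quick check confirms the secret landed; values come back base64-encoded, hence the decode step, and HANA_HOST is just one example key:

kubectl get secret mcp-env-secrets -n mcp-server
kubectl get secret mcp-env-secrets -n mcp-server \
  -o jsonpath='{.data.HANA_HOST}' | base64 -d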
Defining the MCP Server Deployment in Kyma
Once your Docker image is ready and your secrets are securely stored in the cluster, the next step is to define how the MCP Server should run within SAP BTP’s Kyma runtime. This is done using a Kubernetes Deployment, which ensures your server is reliably started and managed within the cluster.
Below is your actual deployment.yaml along with a breakdown of its key components:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
  labels:
    app: mcp-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
        env: prod
        owner: carlosbasto
    spec:
      containers:
        - name: mcp-server
          image: carlosbasto/mcp-server:latest
          ports:
            - containerPort: 8050
          envFrom:
            - secretRef:
                name: mcp-env-secrets
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
Deployment Type
This YAML file defines a Deployment, which is a Kubernetes controller used to manage stateless workloads. It ensures that the desired number of MCP server replicas are running at all times and supports auto-restart on failure or node rescheduling.
Metadata and Labels
name: mcp-server: Defines the unique deployment name.
labels: These are used to group, identify, and select this deployment (e.g., for services or monitoring).
Custom labels like env: prod and owner: carlosbasto help with environment tracking and cost attribution.
Replica Count
replicas: 1: Specifies that one instance (pod) of the MCP server should run. You can easily scale this to multiple replicas later for high availability or horizontal scaling.
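Scaling later is a one-liner; keep in mind that long-lived SSE sessions make pod stickiness important, which is why the Service in the next section sets sessionAffinity:

kubectl scale deployment mcp-server --replicas=3 -n mcp-server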
Pod Selector and Template
matchLabels: app: mcp-server: Ensures this deployment manages pods with the corresponding label.
The template section defines the actual pod spec (containers, ports, envs, etc.).
Container Definition
image: carlosbasto/mcp-server:latest: References the MCP server image pushed to Docker Hub.
containerPort: 8050: The server listens on port 8050; this should match the port exposed in your Dockerfile.
Environment Variables via Secrets
envFrom.secretRef.name: mcp-env-secrets: Injects all the key-value pairs defined in your secrets.yaml as environment variables in the container—safely and declaratively.
Resource Management
requests: Reserves a guaranteed baseline of 250 millicores CPU and 256Mi of memory.
limits: Caps the container at 500 millicores CPU and 512Mi memory to ensure predictable usage.
This configuration ensures your MCP server runs in a secure, lightweight, and production-ready manner within the Kyma environment. Up next, you’ll define how to expose this server externally using a Kubernetes Service and APIRule.
Creating the Kubernetes Service: Exposing the MCP Server Internally
Once the MCP Server is running inside a Kyma-managed Kubernetes pod, the next step is to make it discoverable and accessible within the cluster. This is where a Kubernetes Service comes into play. It provides a stable virtual IP address and DNS name that routes traffic to the correct pod, abstracting away the pod’s lifecycle or dynamic IP changes.
Here’s the actual service.yaml you’re using:
apiVersion: v1
kind: Service
metadata:
  name: mcp-server-service
  namespace: mcp-server
spec:
  selector:
    app: mcp-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8050
  sessionAffinity: ClientIP
kind: Service
This object defines a stable networking abstraction that Kubernetes uses to route traffic to your MCP server pods.
selector: app: mcp-server
This selector ties the service to your deployment. It routes traffic to any pod with the label app: mcp-server, which was defined in your deployment.yaml.
Ports Mapping
port: 80: This is the port that other services or API Gateways inside the cluster will connect to.
targetPort: 8050: This is the actual port your MCP server application listens on, as specified in your Python server (mcp.run(…, port=8050)).
Session Affinity
sessionAffinity: ClientIP: This ensures that all requests from the same client IP are routed to the same pod. This is particularly important for long-lived connections or streaming protocols like Server-Sent Events (SSE), where session persistence is required.
This service acts as the internal gateway between the MCP Server and the rest of the cluster. While it’s not directly exposed to the internet, it forms the foundation for public exposure. In the next step, you’ll use an APIRule to route HTTPS traffic from the outside world into this service, enabling public, secure access to your MCP endpoint in the Kyma runtime.
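Before moving on to public exposure, you can verify the internal routing end to end with a port-forward smoke test:

kubectl port-forward service/mcp-server-service 8080:80 -n mcp-server
# In a second terminal, open the SSE stream through the forwarded port:
curl -N http://localhost:8080/sse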
Creating the APIRule: Secure External Access to the MCP Server via Kyma Gateway
After deploying your MCP Server and exposing it within the Kyma cluster through a Kubernetes Service, the final step is to make it publicly accessible over the internet. SAP BTP’s Kyma runtime handles this securely via an APIRule, which defines how external HTTPS traffic is routed into your internal services through Kyma’s managed ingress gateway.
Below is the apirule.yaml you’re using:
apiVersion: gateway.kyma-project.io/v2alpha1
kind: APIRule
metadata:
  name: mcp-server-apirule
  namespace: mcp-server
spec:
  gateway: kyma-system/kyma-gateway
  hosts:
    - <your-domain>.kyma.ondemand.com
  service:
    name: mcp-server-service
    namespace: mcp-server
    port: 80
  rules:
    - path: /sse
      methods: ["GET"]
      noAuth: true
    - path: /messages/{**}
      methods: ["POST"]
      noAuth: true
gateway: kyma-system/kyma-gateway
This refers to the default ingress controller managed by Kyma. It enables external HTTPS requests to reach your internal workloads securely.
hosts
This is the fully qualified public domain assigned to your MCP Server. Kyma automatically provisions and manages this hostname.
service
This section binds the APIRule to the internal mcp-server-service, which in turn routes traffic to the MCP Server pod(s).
rules
GET /sse: Exposes the Server-Sent Events (SSE) endpoint, allowing the client to open a persistent connection.
POST /messages/{**}: Exposes the endpoint used by the MCP Client to send tool invocation messages. The {**} wildcard ensures support for parameters like session_id.
noAuth: true
Disables authentication for this setup. While fine for testing or internal prototypes, in production scenarios you should consider integrating Kyma’s built-in OAuth2 or JWT-based access control.
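As a sketch of what hardening could look like, the v2alpha1 APIRule supports a jwt access strategy in place of noAuth; the issuer and jwksUri below are placeholders for your identity provider's endpoints:

rules:
  - path: /sse
    methods: ["GET"]
    jwt:
      authentications:
        - issuer: https://<your-idp>/oauth/token   # placeholder issuer
          jwksUri: https://<your-idp>/oauth/certs  # placeholder JWKS endpoint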
This APIRule is the final piece that bridges your MCP Server with the outside world. With it, your Python-based MCP Client can now connect over the internet, invoke tools dynamically, and stream results in real-time, enabling seamless orchestration between local AI agents and enterprise-grade services running in SAP BTP Kyma.
Deploying the MCP Server into SAP BTP Kyma
With all your Kubernetes resource files prepared (secrets.yaml, deployment.yaml, service.yaml, and apirule.yaml), it’s time to deploy your MCP Server into the SAP BTP Kyma runtime. This step brings your custom tool-enabled MCP server into a scalable, cloud-native environment—ready for public use and integration.
Step-by-Step Deployment Using kubectl
Below is the standard deployment flow using the Kubernetes CLI:
1. Log into Your Kyma Cluster
Make sure your local kubectl is configured for your SAP BTP Kyma context:
kubectl config use-context <your-kyma-context>
2. Create the Target Namespace (If Not Already Created)
kubectl create namespace mcp-server
3. Apply Secrets
Securely inject your runtime credentials and connection strings using your previously defined secret:
kubectl apply -f secrets.yaml -n mcp-server
4. Deploy the MCP Server
This creates a pod with resource limits and environment configuration:
kubectl apply -f deployment.yaml -n mcp-server
5. Create the Internal Service
Expose your MCP Server internally within the cluster:
kubectl apply -f service.yaml -n mcp-server
6. Publish the Service Externally
Expose the server to the internet via Kyma’s ingress gateway using an APIRule:
kubectl apply -f apirule.yaml -n mcp-server
7. Monitor the Deployment
Track progress and verify that the server is fully running:
kubectl rollout status deployment mcp-server -n mcp-server
8. Optional: View Server Logs
Useful for debugging or real-time verification of requests:
kubectl logs -l app=mcp-server -n mcp-server --tail=100 -f
Observing MCP Server Health in the Kyma Dashboard
Once the MCP Server has been deployed to SAP BTP Kyma, the Kyma Dashboard becomes a powerful tool for real-time validation and monitoring. This section demonstrates how to visually confirm that your deployment is healthy, tools are being orchestrated correctly, and public endpoints are live.
Namespace Overview: mcp-server
In this view, we’re inspecting the mcp-server namespace in the Kyma Dashboard. This namespace encapsulates all Kubernetes resources related to our MCP Server.
Key indicators:
Status: The namespace is Active, confirming it’s live and isolated for MCP-related workloads.
Uptime: 2 days, showing that the deployment has remained stable over time.
CPU Usage: 0%, showing low computational load.
Memory Usage: 74%, reflecting moderate but efficient runtime resource consumption.
Pods and Deployments: Exactly 1 pod and 1 healthy deployment are running, matching our configuration.
Service: 1 Kubernetes service is listed, which acts as the internal bridge for routing to the MCP Server pod.
This is your first checkpoint after deployment to ensure the environment is healthy and that your resources are behaving as expected.
MCP Deployment View
In the Deployments section of the dashboard, we find the mcp-server deployment running successfully.
Pod count: 1 of 1 pod is running.
Image: Pulled directly from Docker Hub (carlosbasto/mcp-server:latest).
Status: Deployment marked as Available and Progressing, which confirms that the rollout completed and the container is up and able to receive traffic.
This confirms that the container image has been fetched and executed as intended in the Kyma environment.
MCP Pod Status
Under Pods, we see that the MCP Server pod (e.g., mcp-server-76fb848688-qnz5v) is listed as Running.
Port Exposure: Actively listening on port 8050/TCP, as defined in our server configuration.
Readiness Checks: All indicators show green, meaning the application is healthy and fully responsive.
Details: You can view the pod’s logs, restart count, node location, and IP, all useful for debugging and operational tracking.
This screen is essential for confirming the live runtime status of your MCP server.
Public API Exposure with APIRule
In the API Rules (v2alpha1) section, the mcp-server-apirule is visible and marked as Ready.
Public Domain: The hostname mcp-server-sap.c-49e33bd.stage.kyma.ondemand.com is now live and reachable over HTTPS.
Routes:
GET /sse: For streaming Server-Sent Events.
POST /messages/{**}: For sending tool invocation requests (e.g., listTools, callTool).
No Authentication: For PoC purposes, both endpoints are currently accessible without authentication.
This confirms your MCP Server is now publicly callable and available to external MCP clients.
Internal Service Binding
The Services section verifies the Kubernetes service responsible for internal routing:
Service Name: mcp-server-service
Type: ClusterIP, meaning it exposes the app inside the cluster.
Ports: Forwards from port 80 (internal) to 8050 (container).
Selector: Matches app=mcp-server, ensuring only the correct pod(s) receive traffic.
API Rule Integration: Linked directly to the APIRule, ensuring external traffic is routed into the cluster and delivered to the correct pod.
This confirms end-to-end traffic routing is functioning from the public endpoint, through the gateway, into the service, and finally reaching your MCP logic.
Understanding the MCP Client Architecture
The MCP Client is the counterpart to your deployed MCP server. It is responsible for driving the orchestration logic that connects a user’s query to tool selection, execution, and final response synthesis via a large language model (LLM) – in this case, SAP GenAI Hub Orchestration.
Let’s walk through the key components of this client-side system, focusing on the orchestration flow and how it adheres to the Model Context Protocol (MCP).
Establishing the MCP Client Connection
At the heart of the client architecture is the ClientSession object, which handles the entire MCP session lifecycle. It is initialized using the sse_client() function, which opens a Server-Sent Events (SSE) channel to the /sse endpoint exposed by the MCP server.
async with sse_client("https://your-server/sse") as (read_stream, write_stream):
    async with ClientSession(read_stream, write_stream) as session:
        await session.initialize()
This setup enables bi-directional, asynchronous communication between the client and the server:
The client sends JSON-RPC messages such as listTools and callTool.
The server responds with results via the same open stream. This mechanism is both lightweight and robust, making it ideal for LLM-based tool orchestration.
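For illustration, here is roughly how discovery and a direct invocation look once the session is initialized, using list_tools() and call_tool() from the MCP Python SDK (the tool name and coordinates are example values):

# Discover the tools the server exposes
tools_result = await session.list_tools()
for tool in tools_result.tools:
    print(tool.name, "-", tool.description)

# Invoke one of them directly with example arguments
result = await session.call_tool(
    "get_weather", arguments={"latitude": 48.8566, "longitude": 2.3522}
)
print(result.content)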
LLM Orchestration with SAP GenAI Hub
The orchestration framework leverages SAP’s GenAI Hub SDK to:
Define prompt structure using Template.
Control the generation process with OrchestrationConfig.
Trigger the prompt evaluation via OrchestrationService.
These abstractions enable the LLM to reason over user intent, dynamically select tools, and produce intermediate structured outputs (e.g., a JSON-based tool call plan).
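As a rough sketch of how these pieces fit together (import paths follow the generative-ai-hub-sdk and may differ slightly across versions; the prompt text and deployment URL are placeholders):

from gen_ai_hub.orchestration.models.config import OrchestrationConfig
from gen_ai_hub.orchestration.models.llm import LLM
from gen_ai_hub.orchestration.models.message import SystemMessage, UserMessage
from gen_ai_hub.orchestration.models.template import Template, TemplateValue
from gen_ai_hub.orchestration.service import OrchestrationService

# Prompt structure: a fixed system instruction plus a templated user slot
template = Template(
    messages=[
        SystemMessage("You are a planner that selects tools and answers in JSON."),
        UserMessage("{{?user_query}}"),
    ]
)
config = OrchestrationConfig(template=template, llm=LLM(name="gpt-4o"))

# Depending on your setup, the URL may instead be resolved from the AI Core deployment
service = OrchestrationService(api_url="<your-orchestration-deployment-url>", config=config)
response = service.run(
    template_values=[TemplateValue(name="user_query", value="What time is it?")]
)
print(response.orchestration_result.choices[0].message.content)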
MCPAgentExecutor
The MCPAgentExecutor class encapsulates the full agentic reasoning cycle. Its responsibilities include:
Instruction Generation
Uses _generate_instruction() to describe the available tools (fetched via list_tools()) and guide the LLM on how to respond with a structured plan.
LLM-Driven Tool Selection
Runs a prompt using GenAI Hub’s LLM with a schema defined by _build_dynamic_schema(). The output is a list of tool calls the model has decided to invoke.
Tool Invocation
Each tool in the plan is executed via the MCP session’s call_tool() method. Results are collected for final synthesis.
Final Answer Construction
The method _finalize_response() re-invokes the LLM, passing the tool results and user query to generate a clear, human-readable answer.
This modular design makes it easy to extend the agent with new capabilities, handle errors gracefully, and improve tool response conditioning over time.
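In condensed form, the executor's control flow looks something like the sketch below. It is a simplification of the class described above: _generate_instruction() and _finalize_response() come from the blog's implementation, while _plan_tool_calls() stands in for the schema-constrained LLM planning step.

class MCPAgentExecutor:
    def __init__(self, llm, mcp_session, verbose=False):
        self.llm = llm
        self.session = mcp_session
        self.verbose = verbose

    async def run(self, query: str) -> str:
        # 1. Describe the available tools to the LLM
        tools = (await self.session.list_tools()).tools
        instruction = self._generate_instruction(tools)
        # 2. Let the LLM produce a structured tool-call plan
        plan = self._plan_tool_calls(instruction, query)  # {"tool_calls": [...]}
        # 3. Execute each planned call through the MCP session
        results = []
        for call in plan.get("tool_calls", []):
            output = await self.session.call_tool(call["function"], call["parameters"])
            results.append({"tool": call["function"], "output": output.content})
        # 4. Synthesize a final, human-readable answer
        return self._finalize_response(query, results)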
Running the Agent
Here’s the minimal main() loop to execute the system:
llm = LLM(name="gpt-4o", version="latest", parameters={"max_tokens": 2000})
agent = MCPAgentExecutor(llm=llm, mcp_session=session, verbose=False)
result = await agent.run("What time is it in Paris?")
print("Final Answer:", result)
You can ask anything (weather updates, contextual queries, or domain-specific lookups) and the client will:
Ask the MCP server for available tools
Let the LLM decide what to invoke
Return a synthesized, context-aware answer
Applying Model Context Protocol (MCP) to Build Scalable Agentic AI
In our previous blog, we explored how to build an agentic AI system using SAP Generative AI Hub. That system demonstrated how an LLM could reason over user input and dynamically select tools such as a retriever, a clock, and a weather API. While powerful, that implementation relied on custom logic rather than a standardized protocol for tool communication.
In this new implementation, we take the same intelligent agentic flow, but this time using the Model Context Protocol (MCP). MCP introduces a formal interface for how LLMs interact with tools. This enables a more scalable, interoperable, and maintainable system architecture.
Key Capabilities Introduced by MCP
By leveraging MCP as the backbone of this new system, we unlock a number of important benefits:
Context persistence across multiple tool invocations
Protocol standardization via JSON-RPC with well-defined lifecycle methods (listTools, callTool, etc.)
Schema-constrained interaction, which improves both safety and reasoning reliability
Declarative tool registration that removes coupling between infrastructure and logic
Support for streaming via SSE, making it production-ready for real-time agent interaction
Same Prompts with MCP Architecture
To validate our MCP-based implementation, we reran the same test prompts as in our previous blog:
“What time is it?”
“How’s the weather in Paris?”
“What is SAP Business Data Cloud?”
In each case, the responses matched the earlier implementation in clarity and precision, showing that adding protocol structure does not compromise output quality.
Let’s now try a more complex prompt that activates multiple tools simultaneously.
“Can you give me a bread recipe and also tell me: how’s the weather in Brazil, what time it is now, and what is the role of SAP Datasphere in SAP Business Data Cloud?”
The LLM generated a structured tool call plan using the MCP-defined schema:
{
  "tool_calls": [
    {
      "decision": "tool",
      "function": "get_weather",
      "parameters": { "latitude": -14.235, "longitude": -51.9253 }
    },
    {
      "decision": "tool",
      "function": "get_time_now",
      "parameters": {}
    },
    {
      "decision": "tool",
      "function": "retriever",
      "parameters": {
        "question": "What is the role of SAP Datasphere in SAP Business Data Cloud?"
      }
    }
  ]
}
This plan was executed sequentially by the MCP client. Here’s what the agent delivered back:
Final Answer: Here’s the information you requested:
1. **Weather in Brazil**: The current temperature is 32.1°C with a wind speed of 9.3 m/s.
2. **Current Time**: The local time is 17:15 on April 30, 2025.
3. **Role of SAP Datasphere in SAP Business Data Cloud**: SAP Datasphere is essential in the SAP Business Data Cloud as it serves as a platform for installing and working with data products and insight apps. It integrates data from various sources like SAP BW and SAP HANA Cloud, enabling effective data management and analysis. Key roles include:
– Installation of insight apps, creating SAP-managed spaces for app content.
– Data integration from SAP Business Data Cloud for modeling projects.
– Data management, including access control and monitoring integration tasks.
– Data preparation for analytics and exposure for consumption in SAP Analytics Cloud.
4. **Bread Recipe**: Here’s a simple bread recipe for you:
– **Ingredients**:
– 500g bread flour
– 7g instant yeast
– 10g salt
– 300ml water
– **Instructions**:
1. Mix the flour, yeast, and salt in a large bowl.
2. Gradually add water and mix until a dough forms.
3. Knead the dough on a floured surface for about 10 minutes until smooth.
4. Place the dough in a lightly oiled bowl, cover, and let it rise until doubled in size (about 1 hour).
5. Preheat the oven to 220°C (428°F).
6. Shape the dough into a loaf, place it on a baking tray, and let it rise for another 30 minutes.
7. Bake for 25-30 minutes until golden brown and hollow-sounding when tapped.
8. Let it cool before slicing.
Enjoy your homemade bread!
As we move toward building more complex, multi-agent AI systems, using a protocol like MCP helps structure interactions more clearly and manage tools more reliably.
Wrapping Up and Next Steps
In this post, we extended our agentic AI system by incorporating the Model Context Protocol (MCP) into the architecture. Using SAP Generative AI Hub alongside an MCP-compliant server allowed us to replicate the orchestration quality shown in our previous blog while gaining the benefits of structured communication, clearer separation of concerns, and improved interoperability.
The agent was able to reason through multiple tool calls, execute them as needed, and synthesize the results into a single coherent response. This was all handled through a standardized protocol that encourages modular design and reuse across environments.
As scenarios become more complex and toolchains grow, MCP provides a practical foundation for building systems where logic, tools, and models remain decoupled yet work together reliably.
If you’re interested in trying this yourself, the full codebase is available in the repository linked below. It includes the Docker configuration, Kyma deployment files, and a working MCP client.
Looking forward to hearing your thoughts and suggestions as we continue evolving agentic AI with practical, extensible designs.
Happy building!
Further References
Source Code: GitHub repository
Model Context Protocol (MCP)
Model Context Protocol (MCP) Repository
SAP Libraries and SDKs: Generative AI Hub SDK, Orchestration