Hi all,
As you may have heard, SAP recently introduced RPT-1 (Relational Pre-Trained Transformer). To explore how well the model works and what kinds of outputs it returns, I integrated it into a small SAP CAP application. My chosen dataset contains historical sample data of customer pizza orders (order day, order time, pizza type, order quantity, etc.).
The goal: predict each customer’s next order day and next pizza type 🍕.
RPT-1 in Action
A standard Fiori Elements list report displays the historical data that serves as input. When the user selects a customer and clicks “Trigger Prediction,” the CAP backend sends data to the RPT-1 API and receives the predicted results. These are then shown in charts within a dialog.
Short program flow overview:
- CAP reads historical order data from the (HANA) database
- The backend constructs the payload that RPT-1 expects (API key and endpoint details can be found in the official documentation)
- The model returns the predicted columns (next_order_day, next_pizza_name); these fields must also exist in the data structure
- The backend merges historical data and prediction results
- The frontend visualizes both predicted sets using VizFrame charts
I found it very useful to try out RPT-1 in the Playground first before building a full application. There you can experiment with your dataset, understand the prediction behavior, and see how you might need to adjust your data to get the desired outcomes.
If you are curious, as I was, how that works in CAP, here are the steps I followed:
Set up your environment – Use SAP Business Application Studio (BAS) or VS Code with the SAP Fiori and CAP extensions installed. Create a new CAP project or a full-stack application using the Productivity Tools.
Prepare your data – Create a CSV file with columns like customer, order_day, pizza_type, quantity, etc., and deploy it to your database (SQLite via a local db.sqlite, or SAP HANA using cds deploy --to hana).
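As an illustration, such a CSV could look like the fragment below. The column names and the C001 row are my own example values; CAP conventionally picks up CSV files from the db/data folder when the file name matches the entity name:

```csv
customer_id;pizza_name;order_day;next_order_day;next_pizza_name
C001;Margherita;Mon;Wed;Pepperoni
C004;Seafood Special;Sun;Sat;Margherita
```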
Get access to RPT-1 – Use the API endpoint https://rpt.cloud.sap/api/predict. The API token is available on the SAP RPT-1 documentation page (after logging in with your S-user).
Prepare the payload & call the RPT-1 API – I shaped the payload into the form described in the documentation:
const historic = await SELECT.from(HistoricalData);
const trainRows = historic
  // optionally focus the payload on one customer:
  .filter(r => !focusCustomer || r.customer_id === focusCustomer)
  .map(r => ({
    customer_id: r.customer_id,
    pizza_name: r.pizza_name,
    order_day: r.order_day,
    next_order_day: r.next_order_day,
    next_pizza_name: r.next_pizza_name
  }));
Screenshot of the expected payload:
Do not forget to include the rows you want predicted (normally these are part of your initial data set and marked with "[PREDICT]", but to show it explicitly, I also hardcoded one row for which I want prediction results):
const predictRows = [
  { customer_id: focusCustomer, pizza_name: "Margherita", order_day: "Thu",
    next_order_day: "[PREDICT]", next_pizza_name: "[PREDICT]" }
];
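The placeholder mechanics can also be sketched as a small helper that blanks out the target columns of a known row. The function name is my own; only the "[PREDICT]" placeholder itself comes from the RPT-1 payload format:

```javascript
// Hypothetical helper: copy a row and replace its target columns with the
// placeholder value, so RPT-1 treats them as values to be predicted.
const PLACEHOLDER = "[PREDICT]";

function markForPrediction(row, targetColumns) {
  const marked = { ...row };
  for (const col of targetColumns) marked[col] = PLACEHOLDER;
  return marked;
}

const predictRow = markForPrediction(
  { customer_id: "C004", pizza_name: "Margherita", order_day: "Thu",
    next_order_day: "Sat", next_pizza_name: "Seafood Special" },
  ["next_order_day", "next_pizza_name"]
);
// predictRow.next_order_day === "[PREDICT]"
```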
The constructed payload in the service.js file:
const payload = {
  rows: [...trainRows, ...predictRows],
  index_column: "customer_id"
};
Once these preparation steps are done, I used axios to call the API route described above (remember to get your API token first):
try {
  const response = await axios.post(API_URL, payload, {
    headers: {
      "Authorization": `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
  });
  console.log("Prediction result:", JSON.stringify(response.data, null, 2));
} catch (error) {
  console.error("RPT-1 error:", error.response?.data || error.message);
}
}
Once the API call succeeds, the API returns a JSON object (including the historical data you sent along with the prediction result). Here is an extract of the payload I received:
Prediction result: {
  "prediction": {
    "id": "782d5f80-0dd3-4b06-a589-985cfe5a41db",
    "metadata": {
      "num_columns": 4,
      "num_predict_rows": 3,
      "num_predict_tokens": 6,
      "num_rows": 24
    },
    "predictions": [
      {
        "customer_id": "C004",
        "next_order_day": [
          { "confidence": null, "prediction": "Sat" }
        ],
        "next_pizza_name": [
          { "confidence": null, "prediction": "Seafood Special" }
        ]
      },
      {
        "customer_id": "C004",
        "next_order_day": [
          { "confidence": null, "prediction": "Thu" }
        ],
        "next_pizza_name": [
          { "confidence": null, "prediction": "Margherita" }
        ]
      },
      {
        "customer_id": "C004",
        "next_order_day": [
          { "confidence": null, "prediction": "Sun" }
        ],
        "next_pizza_name": [
          { "confidence": null, "prediction": "Margherita" }
        ]
      }
    ]
  },
  "delay": 272.4773660004139,
  "aiApiRequestPayload": {
    "prediction_config": {
      "target_columns": [
        {
          "name": "next_order_day",
          "placeholder_value": "[PREDICT]",
          "task_type": "classification"
        },
        {
          "name": "next_pizza_name",
          "placeholder_value": "[PREDICT]",
          "task_type": "classification"
        }
      ]
    },
    "rows": [ // rows that were present in my historical data
      {
        "customer_id": "C004",
        "pizza_name": "Seafood Special",
        "order_day": "Sun",
        "next_order_day": "Sat",
        "next_pizza_name": "Margherita"
      }, … // more rows
If you only care about the prediction result (for further processing), you will find it in:

const predictions = response.data.prediction.predictions;

"predictions": [
  {
    "customer_id": "C004",
    "next_order_day": [
      { "confidence": null, "prediction": "Sat" }
    ],
    "next_pizza_name": [
      { "confidence": null, "prediction": "Seafood Special" }
    ]
  },
  {
    "customer_id": "C004",
    "next_order_day": [
      { "confidence": null, "prediction": "Thu" }
    ],
    "next_pizza_name": [
      { "confidence": null, "prediction": "Margherita" }
    ]
  },
  {
    "customer_id": "C004",
    "next_order_day": [
      { "confidence": null, "prediction": "Sun" }
    ],
    "next_pizza_name": [
      { "confidence": null, "prediction": "Margherita" }
    ]
  }
]
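To derive a single "top" value per target column from these rows (used later as topDay and topPizza), one option is a simple frequency count over the returned predictions. This helper is my own sketch, not part of the RPT-1 API:

```javascript
// Hypothetical helper: tally the predicted values across all returned rows
// and pick the most frequent one per target column.
function topPrediction(predictions, field) {
  const counts = {};
  for (const row of predictions) {
    for (const p of row[field] || []) {
      counts[p.prediction] = (counts[p.prediction] || 0) + 1;
    }
  }
  // pick the value with the highest count (ties resolved by first occurrence)
  return Object.entries(counts).reduce(
    (best, cur) => (cur[1] > best[1] ? cur : best), ["", 0]
  )[0];
}

// Sample shaped like the extract above:
const predictions = [
  { customer_id: "C004",
    next_order_day: [{ confidence: null, prediction: "Sat" }],
    next_pizza_name: [{ confidence: null, prediction: "Seafood Special" }] },
  { customer_id: "C004",
    next_order_day: [{ confidence: null, prediction: "Thu" }],
    next_pizza_name: [{ confidence: null, prediction: "Margherita" }] },
  { customer_id: "C004",
    next_order_day: [{ confidence: null, prediction: "Sun" }],
    next_pizza_name: [{ confidence: null, prediction: "Margherita" }] },
];

const topDay = topPrediction(predictions, "next_order_day");    // "Sat" (three-way tie, first wins)
const topPizza = topPrediction(predictions, "next_pizza_name"); // "Margherita" (2 of 3)
```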
Even though we have the predicted results, in most cases we want to combine the historical data with them to display a graphical trend or relation.
In my case, to get the charts showing the next potential order day and the next potential pizza, I combined the data in an object and returned it to my Handler.js file. To get the desired outcome in the graphical illustration later in the front end, I returned the predicted and historical data like this:
let returnObject = {
  customerId: focusCustomer,
  history: {
    byDay: historyByDay,      // e.g., bind to series "History"
    byPizza: historyByPizza
  },
  predictions: {
    rows: predsForCustomer,   // raw rows if you want a details table
    top: { day: topDay, pizza: topPizza },  // for highlighting
    byDay: predictedByDay,    // e.g., 2nd series "Predicted" on the weekday chart
    byPizza: predictedByPizza // e.g., overlay or stacked series on the pizza chart
  }
};
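The historyByDay and predictedByDay fields above are aggregations I built myself; a minimal sketch of such a per-weekday count (names and shape are my own assumptions) could look like this:

```javascript
// Hypothetical aggregation helper: count occurrences per weekday so each
// chart series can be bound to an array of { day, count } points.
const WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"];

function countByDay(rows, field) {
  const counts = Object.fromEntries(WEEKDAYS.map(d => [d, 0]));
  for (const r of rows) {
    if (counts[r[field]] !== undefined) counts[r[field]] += 1;
  }
  // return in fixed weekday order so the chart axis stays stable
  return WEEKDAYS.map(day => ({ day, count: counts[day] }));
}

const historyByDay = countByDay(
  [{ order_day: "Thu" }, { order_day: "Thu" }, { order_day: "Sun" }],
  "order_day"
);
// historyByDay[3] -> { day: "Thu", count: 2 }
```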
Visualize the results – I used the returned JSON as the basis for the graphical illustration in the front end. In the Handler.js I filled a JSON model with the data received from the backend (the object above) and wired the XML fragment around it.
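For binding two series ("History" and "Predicted") to one chart, I found it convenient to flatten both aggregations into a single row set first. This shaping step is a sketch under my own naming, assuming both inputs are arrays of { day, count }:

```javascript
// Hypothetical shaping step: merge history and prediction counts into one
// row set that a chart with two measures (history / predicted) can bind to.
function toChartRows(byDayHistory, byDayPredicted) {
  const predMap = new Map(byDayPredicted.map(p => [p.day, p.count]));
  return byDayHistory.map(h => ({
    day: h.day,
    history: h.count,
    predicted: predMap.get(h.day) || 0 // days without a prediction get 0
  }));
}

const rows = toChartRows(
  [{ day: "Thu", count: 2 }, { day: "Sat", count: 1 }],
  [{ day: "Sat", count: 1 }]
);
// rows[1] -> { day: "Sat", history: 1, predicted: 1 }
```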
Conclusion
Including RPT-1 in a CAP app felt very smooth. In particular, all the important information (API keys, API route, payload structure) was easy to find. During development I still had in the back of my mind the question of when to use other, seemingly similar SAP services (like Data Attribute Recommendation, for instance). I know other services focus more on the full ML life cycle (bringing your own model, training iterations, etc.), but with all the available services I sometimes wonder which tool to choose from the big toolbox SAP provides.
At first sight, I feel that a quick integration option like RPT-1 can cover 70-80% of use cases, at least to get accurate results in a very short time. Still, I believe that for data-specific use cases, where I want to use my own trained model, Data Attribute Recommendation would remain my preferred option.
Please also share your opinion on which services you use for which use cases.
Since this is my first SAP Community article, I’d be happy to receive your feedback. Let me know in the comments how you see RPT-1 and the new doors it opens for enterprise AI.