Graphs are becoming a core part of how businesses understand their data. In SAP HANA Cloud, we already support strong property graph capabilities through the property graph engine, and a rich RDF/Knowledge Graph model through the knowledge graph engine. Until now, these two worlds lived separately.
With our new built-in graph procedure, we make it possible to turn a property graph into a knowledge graph (RDF) in one step. This gives customers the best of both worlds: the flexibility of property graphs and the semantic richness of RDF. This blog walks through the idea in simple terms and shows how it works behind the scenes.
Before we jump into details, let’s understand why we would want to convert a property graph to a knowledge graph:
Customers often start with a property graph because it is easy to model, easy to modify, and great for traversal and analytics.
Over time, they want additional capabilities like:
- a standardized representation of entities
- richer semantic modeling
- SPARQL querying
- interoperability with external tools
- grounding their data for AI, LLMs, and RAG pipelines
Knowledge graphs shine here. By converting property graphs to the RDF format, teams can keep using their familiar structure while unlocking a wider ecosystem.
The new built-in procedure
SAP HANA Cloud introduces a simple built-in procedure with its QRC4 2025 release: RDF_GRAPH_FROM_GRAPH_WORKSPACE. This procedure converts a property graph workspace into an RDF graph (knowledge graph).
It copies the graph data, but keeps the models independent. This means updates to one don’t automatically reflect in the other.
A simple supply chain graph example
To make this more tangible, let’s walk through a small but realistic example.
Imagine a supply chain graph with three main tables:
- Suppliers – Each supplier has attributes like name, country, and risk level.
- Materials – Materials represent the goods being supplied.
- Shipments – Shipments connect suppliers to materials and carry business context such as shipment status and delay days.
In SAP HANA Cloud, we will first model this as a property graph workspace.
- Suppliers and materials are vertex tables
- Shipments form an edge table connecting suppliers to materials
- Shipment-specific attributes remain attached to the edge
This structure is very natural for operational and analytical use cases. You can easily answer questions like:
- Which suppliers provide a given material?
- Which suppliers are associated with delayed shipments?
- How does risk propagate through the supply network?
Creating the property graph
At the data level, the graph is built on top of relational tables. Suppliers and materials are modeled as vertex tables, while shipments act as the edge table connecting them. Once the tables are defined, a graph workspace ties everything together:
Step 1: Let’s begin by creating plain relational tables.
Suppliers
CREATE SCHEMA SCM;
SET SCHEMA SCM;
CREATE TABLE SCM.SUPPLIERS (
  SUPPLIERID NVARCHAR(50) PRIMARY KEY,
  NAME NVARCHAR(100),
  COUNTRY NVARCHAR(50),
  RISKLEVEL NVARCHAR(20)
);
Materials
CREATE TABLE SCM.MATERIALS (
  MATERIALID NVARCHAR(50) PRIMARY KEY,
  NAME NVARCHAR(100)
);
Shipments (relationship data)
CREATE TABLE SCM.SHIPMENTS (
  SHIPMENTID NVARCHAR(50) PRIMARY KEY,
  SUPPLIER NVARCHAR(50) NOT NULL,
  MATERIAL NVARCHAR(50) NOT NULL,
  STATUS NVARCHAR(20),
  DELAYDAYS INTEGER
);
Step 2: Insert sample data into the tables
-- Suppliers
INSERT INTO SCM.SUPPLIERS (SupplierID, Name, Country, RiskLevel)
VALUES ('S1', 'Alpha Metals', 'China', 'High');
INSERT INTO SCM.SUPPLIERS (SupplierID, Name, Country, RiskLevel)
VALUES ('S2', 'Nordic Steel', 'Sweden', 'Low');
INSERT INTO SCM.SUPPLIERS (SupplierID, Name, Country, RiskLevel)
VALUES ('S3', 'TerraChem', 'Brazil', 'Medium');
-- Materials
INSERT INTO SCM.MATERIALS (MaterialID, Name)
VALUES ('M1', 'Steel Rods');
INSERT INTO SCM.MATERIALS (MaterialID, Name)
VALUES ('M2', 'Copper Wire');
-- Shipments
INSERT INTO SCM.SHIPMENTS (ShipmentID, Supplier, Material, Status, DelayDays)
VALUES ('SH1', 'S1', 'M1', 'Delayed', 12);
INSERT INTO SCM.SHIPMENTS (ShipmentID, Supplier, Material, Status, DelayDays)
VALUES ('SH2', 'S2', 'M1', 'OnTime', 0);
INSERT INTO SCM.SHIPMENTS (ShipmentID, Supplier, Material, Status, DelayDays)
VALUES ('SH3', 'S3', 'M2', 'Delayed', 5);
At this point, the data is still relational.
Step 3: Create a property graph workspace
Now we turn the tables above into a graph workspace named SUPPLY_GRAPH.
CREATE GRAPH WORKSPACE SCM.SUPPLY_GRAPH
  VERTEX TABLE SCM.SUPPLIERS
    KEY SUPPLIERID
  VERTEX TABLE SCM.MATERIALS
    KEY MATERIALID
  EDGE TABLE SCM.SHIPMENTS
    KEY SHIPMENTID
    SOURCE SUPPLIER REFERENCES SCM.SUPPLIERS
    TARGET MATERIAL REFERENCES SCM.MATERIALS;
What this gives us:
- Vertices
  - Suppliers with properties like name, country, and risk level
  - Materials with their attributes
- Edges
  - Shipments connecting suppliers to materials
  - Edge properties such as status and delay days
This graph is optimized for traversal and graph algorithms.
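Before converting, it is worth noting that the workspace is already queryable with openCypher. As a quick sketch, assuming the OPENCYPHER_TABLE function in SAP HANA Cloud (the column aliases here are illustrative), the delayed-shipment question above could be answered like this:

```sql
-- Sketch: find suppliers with delayed shipments via openCypher pattern matching.
-- Assumes the SUPPLY_GRAPH workspace created above; aliases are illustrative.
SELECT *
FROM OPENCYPHER_TABLE(
  GRAPH WORKSPACE "SCM"."SUPPLY_GRAPH"
  QUERY '
    MATCH (s)-[sh]->(m)
    WHERE sh.STATUS = ''Delayed''
    RETURN s.NAME AS SUPPLIER_NAME, m.NAME AS MATERIAL_NAME, sh.DELAYDAYS AS DELAY_DAYS
  '
);
```

With the sample data above, this would surface the Alpha Metals and TerraChem shipments.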
Step 4: What’s new in QRC4 2025 – Property Graph (PG) to Knowledge Graph (KG) conversion
This is where the new procedure, RDF_GRAPH_FROM_GRAPH_WORKSPACE, comes in. It converts an existing graph workspace into an RDF knowledge graph, here named 'scmkg'.
The conversion call:
CALL RDF_GRAPH_FROM_GRAPH_WORKSPACE(
  GRAPH_WORKSPACE_SCHEMA_NAME => 'SCM',
  GRAPH_WORKSPACE_NAME => 'SUPPLY_GRAPH',
  TARGET_GRAPH_URI => '<scmkg>',
  MAPPING_MODE => 'AUTO',
  MAPPING_OUT => ?
);
Step 5: How the conversion works (in simple terms)
The conversion follows a few clear rules.
Vertices become RDF triples – Each vertex is converted into multiple RDF triples.
- The vertex key becomes the subject
- Each attribute becomes a predicate
- Attribute values become objects
For example, a supplier like Alpha Metals becomes a set of triples such as:
<http://SCM/SUPPLIERS/SUPPLIERID/S1> <http://SCM/SUPPLIERS/NAME> "Alpha Metals" .
<http://SCM/SUPPLIERS/SUPPLIERID/S1> <http://SCM/SUPPLIERS/COUNTRY> "China" .
<http://SCM/SUPPLIERS/SUPPLIERID/S1> <http://SCM/SUPPLIERS/RISKLEVEL> "High" .
This directly mirrors the rows in the vertex table.
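Edges are converted as well. Judging from the quoted-triple pattern used in the SPARQL queries in Step 7, a shipment edge can be pictured as an embedded (RDF-star) statement, with the edge's own properties attached to that quoted triple. The following is an illustrative sketch in the same notation as above, not verbatim system output:

```
<< <http://SCM/SUPPLIERS/SUPPLIERID/S1> <http://SCM/SHIPMENTS/SHIPMENTS> <http://SCM/MATERIALS/MATERIALID/M1> >> <http://SCM/SHIPMENTS/STATUS> "Delayed" .
<< <http://SCM/SUPPLIERS/SUPPLIERID/S1> <http://SCM/SHIPMENTS/SHIPMENTS> <http://SCM/MATERIALS/MATERIALID/M1> >> <http://SCM/SHIPMENTS/DELAYDAYS> 12 .
```

This keeps edge attributes like status and delay days queryable alongside the relationship itself.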
Vertex types become rdf:type – Vertex labels are converted into RDF classes.
Conceptually, this means:
<Suppliers/S1> rdf:type <Supplier> .
<Materials/M1> rdf:type <Material> .
This adds semantic meaning and allows tools and queries to reason about what kind of thing each node represents.
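With types in place, nodes can be selected by class in SPARQL. The class URI below is an assumption for illustration (the actual generated URI depends on the mapping mode and defaults), so treat this as a sketch:

```sql
-- Sketch: list all nodes typed as suppliers.
-- The class URI <http://SCM/SUPPLIERS> is an assumption, not guaranteed output.
SELECT *
FROM SPARQL_TABLE('
  PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
  SELECT ?s
  FROM <scmkg>
  WHERE {
    ?s rdf:type <http://SCM/SUPPLIERS> .
  }
');
```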
Step 6: Mapping modes and URI control
By default, the system generates URIs automatically. These are predictable, but often not ideal for long-term semantic use. To give users full control, the conversion supports an optional mapping table in which you can define the schema, table, attribute, RDF URI, data type, and optional language tags.
Using a mapping table allows customers to:
- apply meaningful, business-friendly URIs
- align with existing vocabularies
- standardize naming across systems
If a mapping table is not provided, the system falls back to sensible defaults:
- URIs are generated in the form https://localhost/<schema>/<table>/<attribute>
- Vertex labels default to table names
- Unmapped attributes are handled based on the selected mapping mode
This makes it easy to get started without extra configuration.
Mapping modes that fit different customer needs
The conversion supports multiple modes, depending on how strict or exploratory the user wants to be. These modes are also described in the official documentation.
- AUTO (default) – Uses mapping table entries when provided, otherwise generates system URIs.
- MAPPING_ONLY – Generates mapping output without creating RDF. Useful for validation.
- IGNORE – Converts only attributes explicitly listed in the mapping table.
- STRICT – Requires all attributes to be mapped. Fails if anything is missing.
You can also supply a mapping table to assign clean, meaningful URIs and prefixes, including standard vocabularies like FOAF. An out-mapping table is generated automatically so users can inspect exactly how attributes were mapped.
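As a purely hypothetical illustration of the idea (the real column names and table layout are defined in the official documentation, not here), one mapping row could align a supplier attribute with the FOAF vocabulary:

```sql
-- Hypothetical sketch only: the actual mapping-table interface is defined in the
-- official documentation. Conceptually, one row maps an attribute to a URI:
--
--   schema    table        attribute   RDF URI                           data type
--   'SCM'     'SUPPLIERS'  'NAME'      'http://xmlns.com/foaf/0.1/name'  'xsd:string'
```

With such a row, the generated triples would use foaf:name instead of a system-generated predicate URI.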
Step 7: Query the knowledge graph with SPARQL
Once the RDF graph is created, it can be queried directly inside SAP HANA Cloud using the SPARQL_TABLE or SPARQL_EXECUTE functions.
View the RDF graph created:
SELECT *
FROM SPARQL_TABLE('
  PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
  PREFIX ex: <http://example.org/scm/>
  SELECT ?subject ?predicate ?obj
  FROM <scmkg>
  WHERE {
    ?subject ?predicate ?obj .
  }
');
The SPARQL query below retrieves which suppliers ship which materials by traversing shipment relationships in the converted knowledge graph.
SELECT *
FROM SPARQL_TABLE('
  SELECT DISTINCT ?supplierName ?materialName
  FROM <scmkg>
  WHERE {
    # ensure the embedded shipment triple exists
    FILTER EXISTS {
      << ?supplier <http://SCM/SHIPMENTS/SHIPMENTS> ?material >> ?p ?o .
    }
    # supplier attributes
    ?supplier <http://SCM/SUPPLIERS/NAME> ?supplierName .
    # material attributes
    ?material <http://SCM/MATERIALS/NAME> ?materialName .
  }
');
The same data you modeled as a property graph is now available through standard SPARQL queries.
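Because edge properties stay attached to the quoted shipment triple, they remain queryable as well. A sketch along the same lines as the query above, surfacing the delay days per supplier (the predicate URIs follow the same illustrative pattern):

```sql
-- Sketch: suppliers with delayed shipments, read from the converted RDF graph.
SELECT *
FROM SPARQL_TABLE('
  SELECT ?supplierName ?delay
  FROM <scmkg>
  WHERE {
    << ?supplier <http://SCM/SHIPMENTS/SHIPMENTS> ?material >>
        <http://SCM/SHIPMENTS/DELAYDAYS> ?delay .
    ?supplier <http://SCM/SUPPLIERS/NAME> ?supplierName .
    FILTER(?delay > 0)
  }
');
```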
Key benefits for customers:
- Standardized representation – RDF makes data interoperable across tools, platforms, and organizational boundaries.
- Richer semantics – Customers can define precise meanings through URIs, classes, and vocabularies.
- Easy integration with AI – Knowledge graphs are becoming foundational for RAG systems and reasoning. This conversion gives customers a clean way to prepare their graph data for LLM-based applications.
- No need to duplicate data models – Users start with property graphs and layer semantics later as needed.
- Works with existing HANA Cloud graph workspaces – Nothing special is required on the modeling side.
Looking ahead
This blog is intended as an introduction and starting point for the new Property Graph to Knowledge Graph conversion capability in SAP HANA Cloud. It represents the first step toward deeper interoperability between the property graph engine and the knowledge graph engine. The long-term vision is to reduce barriers between the two and allow users to move freely between graph patterns as their needs evolve. On our roadmap, we also plan to support conversion in the opposite direction, from knowledge graphs back to property graphs, helping to close the loop.
As graph workloads continue to grow inside SAP HANA Cloud, customers increasingly want both modeling flexibility and semantic power. With this new PG to KG conversion feature, we provide an easy entry point into the world of knowledge graphs without changing how data is modeled or stored today.
This blog focuses on introducing the core concepts and mechanics in simpler terms. In a follow-up blog, I look forward to diving deeper into a real-world use case example and exploring advanced mappings and scenarios that this capability is designed to handle.