This is part 3 of our blog series investigating how to reuse existing BW artefacts in BDC such as extractors and custom logic.
As organizations embrace cloud-native data architectures, the journey from SAP BW on HANA 7.5 to SAP Business Data Cloud (BDC) represents a critical modernization path. Our team recently completed a pilot implementation, focusing first on lifting BW into SAP BDC and then modernizing it with SAP Datasphere as the primary target architecture.
This blog post shares practical insights and key learnings from this experience, particularly around handling standard extractors and preserving custom extractor business logic in SAP BDC.
I. The Challenge: Standard Extractors in a Modern Architecture
Think of standard SAP extractors as old filing systems that worked perfectly in traditional warehouses but struggle in modern cloud environments. Standard SAP extractors (2LIS_02_ITM, 0CO_OM_CCA_9, 2LIS_03_BF) present significant migration challenges when moving from traditional BW environments to SAP Business Data Cloud, primarily because of missing primary key definitions. SAP has modernized these extractors through SAP Datasphere, which acts as an intelligent intermediary that handles the complex transformation from legacy extraction patterns to cloud-native data products, enabling faster, more reliable data flows without requiring extensive custom coding.
Implementation Reality
With delta processing already active and productive in your Datasphere environment, the three critical extractors now operate seamlessly: 2LIS_02_ITM delivers purchase order changes, 0CO_OM_CCA_9 streams cost center postings, and 2LIS_03_BF provides material movements for non-cumulative inventory analysis. Datasphere eliminates traditional complexity by managing initialization, delta processing, and data semantics automatically, transforming raw SAP tables into business-ready entities (Purchase Orders, Cost Centers, Stock) that SAP Analytics Cloud can consume directly for analytics and planning scenarios. The same entities can also be consumed directly by SAP HANA Cloud for custom AI/ML scenarios or by end-to-end Intelligent Applications.
Design implications now that delta handling works
With delta mode active in Datasphere, your design simplifies and becomes more “cloud-native”:
Each extractor follows a clear pattern:
- Run initialization and delta in a Replication Flow in Datasphere.
- Configure the extractor “primary key” in the Configure Schema step.
- Enable scheduled delta loads (e.g., hourly, daily, near real time).
- New data is created in your source system (ECC or S/4HANA).
- Deltas are replicated into Datasphere (the table is filtered for the specific PO that was newly created).

Operational benefits:
- No more large daily full loads.
- Network and storage usage is reduced because only changes move.
- Latency improves: BDC can be updated much more frequently.
- Monitoring focuses on delta queues and extractor status in Datasphere.
Practical checklist for your migration
When running this in a real project with delta fully available:
Per extractor:
- Confirm that initialization has completed successfully in Datasphere.
- Ensure delta jobs are configured, scheduled, and monitored.
- Align delta frequency with business needs (close to real time vs. batch).

In Datasphere:
- Model business keys clearly (e.g., PO number, item, company code; controlling area, cost center, period).
- Build semantic models (views, business entities) that hide technical complexity from BDC consumers. Stay tuned for more details on this topic in the next blog post.
- Use lineage so others know which extractors feed which BDC data products.

In BDC:
- Point your analytical models and stories to the Datasphere-based artifacts instead of direct raw tables.
- Design calculations assuming data is continuously refreshed via delta.
Beyond standard extractors, the real differentiation in a BDC migration comes from how you bring your custom fields and business logic in extractor enhancements into the new landscape.
II. How to Handle Custom Fields and Extractor Extensions in SAP Business Data Cloud with SAP Datasphere
Reusing a customer’s existing custom logic is essential in SAP because it preserves years of in-house business knowledge, avoids costly redevelopment, and accelerates modernization while preserving business outcomes without business disruption.
In our pilot, we showed how SAP Datasphere within SAP Business Data Cloud can consume SAP standard extractors enhanced by custom fields without redesigning the entire data flow from scratch. Using the 0FI_AP_4 extractor (which supports both full and delta loads) as an example, the steps below illustrate how to expose customized business rules in financial data end‑to‑end to BDC.
These steps are not meant to be a full tutorial on how to enhance a standard SAP extractor – that pattern has been around for decades – but we believe it is still useful to summarize the key activities here. We will also show how these enhanced extractors are replicated (initial load and delta) into SAP Datasphere, in SAP Business Data Cloud.
Check Existing Fields and Enhancement Options
- In the source system, use RSA2 (or RSO2/RSA6 depending on release) to display DataSource 0FI_AP_4.
- Verify whether the fields you need are already present but hidden (this is common in standard extractors).
- If they are not available, plan to add them via an append structure and fill them using RSAP0001 or a BAdI implementation.

Enhance the Extract Structure (Append Structure)
- Still in RSA2, navigate to the extract structure of 0FI_AP_4.
- Create an append structure and add your custom fields (usually prefixed with ZZ or YY).
- Assign the correct data elements (taken from SE11 table definitions) and activate the append structure.
After activation, the new fields are physically part of the extract structure but not yet filled with data.
Implement the Extractor Logic (User Exit / BAdI)
- Create or reuse an enhancement project in CMOD and assign enhancement RSAP0001.
- For transactional data, implement component EXIT_SAPLRSAP_001 (include ZXRSAU01) and write the ABAP logic that populates the new fields for DataSource 0FI_AP_4.
- Use I_DATASOURCE in the exit to branch specifically on 0FI_AP_4.
- Read the necessary tables (e.g. BSEG, BKPF, or customer tables) with SELECT statements and loop over the extraction internal table to fill the custom fields using field symbols.
- Activate the enhancement project and the include program.
(Alternative: in newer systems you can use the dedicated BAdI for DataSource enhancement instead of RSAP0001; the principle for the logic remains the same.)
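To make the exit pattern concrete, here is a minimal sketch of include ZXRSAU01. The append field ZZBKTXT and the BKPF lookup are illustrative assumptions, not part of the standard DataSource; DTFIAP_3 is the extract structure of 0FI_AP_4 in typical releases — verify the name in RSA2 on your system.

```abap
*---------------------------------------------------------------------*
* Include ZXRSAU01 - called from EXIT_SAPLRSAP_001 (enhancement RSAP0001)
* Sketch only: append field ZZBKTXT and the BKPF lookup are assumptions.
*---------------------------------------------------------------------*
DATA: l_s_bkpf TYPE bkpf.

FIELD-SYMBOLS: <l_s_fiap> TYPE dtfiap_3.  " extract structure of 0FI_AP_4

CASE i_datasource.
  WHEN '0FI_AP_4'.
    " Loop over the extraction internal table and fill custom fields
    LOOP AT c_t_data ASSIGNING <l_s_fiap>.
      " Read the accounting document header for this line item
      SELECT SINGLE * FROM bkpf INTO l_s_bkpf
        WHERE bukrs = <l_s_fiap>-bukrs
          AND belnr = <l_s_fiap>-belnr
          AND gjahr = <l_s_fiap>-gjahr.
      IF sy-subrc = 0.
        " Example mapping: copy the document header text
        <l_s_fiap>-zzbktxt = l_s_bkpf-bktxt.
      ENDIF.
    ENDLOOP.
ENDCASE.
```

Because the field symbol assigns directly into the extraction table, no MODIFY is needed; for high-volume extractors, consider buffering the lookup (e.g., FOR ALL ENTRIES into an internal table) instead of a SELECT SINGLE per row.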
Unhide and Configure the Fields in RSA6
- Open RSA6 and choose the source system.
- Select DataSource 0FI_AP_4 in change mode.
- Locate the new fields and remove the Hide flag; if needed, adjust properties such as “field only known in exit”.
- Save and activate the DataSource so that the new fields become visible for extraction (RSA3) and for replication to BW/Datasphere.
If you extended an LO Cockpit DataSource, you might also need to check the extract structure and update/activate it via LBWE so that queue filling works correctly.
In our pilot, we implemented a simple piece of logic that assigns a single value (1000) to the controlling area.
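The pilot's exit logic therefore reduced to a few lines; in this sketch, ZZKOKRS is an assumed name for the controlling area append field:

```abap
* Sketch of the pilot logic in ZXRSAU01: ZZKOKRS is an assumed append field
FIELD-SYMBOLS: <l_s_fiap> TYPE dtfiap_3.  " extract structure of 0FI_AP_4

IF i_datasource = '0FI_AP_4'.
  LOOP AT c_t_data ASSIGNING <l_s_fiap>.
    " Assign a constant controlling area, as done in the pilot
    <l_s_fiap>-zzkokrs = '1000'.
  ENDLOOP.
ENDIF.
```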
Replicate and Consume in SAP Datasphere within SAP BDC
- In SAP Datasphere, connect to the source system and activate the enhanced extractor 0FI_AP_4.
- Create a replication flow to run an initial full load and delta loads, now including the custom fields.
- Once the replication flow runs and creates the local table in Datasphere, the enhanced fields and their data are replicated automatically.
This completes the chain from technical enhancement in S/4HANA (or ECC) to business‑ready data, analytics and AI in BDC, while following the standard BW enhancement process.
III. Next Steps: From data replication to answering business questions
Once extractors (standard and extended) are replicated into SAP Datasphere and exposed to BDC, the next step is to turn raw data into multidimensional models that answer different business questions.
- In SAP Datasphere, build Analytic Models on top of the replicated tables and DataSources, defining measures, dimensions, and hierarchies that fulfill business requirements. The latest update to the data product generator allows you to semantically import BW 7.5 PCE InfoProvider data (it works like semantic onboarding but is triggered from the BW side and pushed into the object store of SAP Datasphere).
- Publish these models as consumption artifacts (expose analytic models/views, create and share data products) that can be consumed by SAP Analytics Cloud, SAP HANA Cloud apps, and other BDC clients, tools, and applications.
- In SAP Analytics Cloud, create stories, dashboards, and planning models that leverage the Datasphere models for live reporting, seamless planning, and simulation.
- In SAP HANA Cloud, build high-performance end-to-end intelligent applications that leverage custom AI/ML and multi-model capabilities for structured and unstructured data.
This way, the project does not stop at data replication: it delivers an end‑to‑end analytical platform where business users consume trusted, delta‑enabled data products in SAP Business Data Cloud for fact based decision support.
Conclusion
Our pilot project demonstrated that migrating from SAP BW 7.5 on HANA to SAP Business Data Cloud using SAP Datasphere is not only feasible but can be done while preserving critical business logic and custom investments.
The key to success lies in thorough planning, understanding the current state of your extraction landscape, and leveraging the migration as an opportunity to modernize while preserving what works. As SAP continues to enhance BDC capabilities, organizations can confidently plan their migration journeys knowing that both standard and custom extraction scenarios are well-supported.
Stay tuned for the 4th blog post in this series, ‘Beyond Data Warehousing, AI-assisted migration of Business Logic from BW 7.5 to BDC Part 1’
#SAP
#SAPTechnologyblog