Hi,
This content will help us understand the major differences between Replication Flow and Remote Table Replication in one page.
Which connectors are required?
Replication Flow : SAP Cloud Connector can be used to connect ABAP-based systems.
Remote Table Replication : The DP Agent acts as a bridge between Datasphere and the on-premise system.
Supported source objects :
Replication Flow : Allows the replication of multiple assets (e.g., CDS views as source) within a single flow.
Remote Table Replication : One individual data asset (CDS view / table) at a time.
Data Target :
Replication Flow : Data can be loaded into both SAP Datasphere and external systems as targets.
Remote Table Replication : The target is SAP Datasphere only.
Data Loads :
Replication Flow offers two options :
Initial Only : Loads all the data once from the source.
Initial and Delta : After the initial load, the system checks for source data changes at regular intervals and transfers the delta to the target system.
You can customize this interval via the side panel, setting it to any value between 0-24 hours and 0-59 minutes.
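As a minimal sketch of the rule above (this is an illustration of the side-panel constraint, not an SAP API; the zero-interval check is my assumption), the hours/minutes setting can be validated and normalized like this:

```python
# Illustrative sketch (not an SAP API): validate a delta-check interval
# as entered in the replication flow side panel (0-24 hours, 0-59 minutes).

def validate_interval(hours: int, minutes: int) -> int:
    """Return the interval in total minutes, or raise if out of range."""
    if not 0 <= hours <= 24:
        raise ValueError("hours must be between 0 and 24")
    if not 0 <= minutes <= 59:
        raise ValueError("minutes must be between 0 and 59")
    total = hours * 60 + minutes
    if total == 0:  # assumption: a zero-length interval is not meaningful
        raise ValueError("interval must be greater than zero")
    return total

print(validate_interval(1, 30))  # → 90 (check for deltas every 90 minutes)
```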
Remote Table Replication offers two modes :
Snapshot replication : Copies the full dataset from a source object (e.g., a database table or view) into SAP Datasphere in one go, i.e., a static full load per run. Examples: month-end balances, head counts.
Real-Time Data Replication (via the “Enable Real-Time Data Replication” option) : Data changes from the source object are continuously replicated into SAP Datasphere until the run is terminated manually or per schedule. The replication frequency is system-defined and cannot be manually adjusted in SAP Datasphere.
What are Partitioning and Parallelization?
Partitioning and parallelizing the data transfers improve performance when replicating large datasets.
The approach varies depending on the source object type:
ABAP source system, SLT-table- and CDS-view-based data replication : Partitions are calculated automatically, but users can adjust them manually via SAP ABAP system parameters.
ABAP source system, ODP-based data replication : By default, the number of partitions is 3, but users can modify this via the ODP_RMS_PARTITIONS_LOAD parameter in the SAP ABAP system.
Database source system, table-based replication : Partitions are determined automatically and cannot be modified.
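To make the idea concrete, here is a simplified illustration of what partitioning achieves: splitting a source object's key range into contiguous chunks so each chunk can be transferred in parallel. This is a generic sketch, not SAP's internal partitioning algorithm:

```python
# Simplified illustration (not SAP's actual algorithm) of how partitioning
# splits a table's key range so each partition can be handled by its own
# work order in parallel.

def partition_key_range(min_key: int, max_key: int, partitions: int):
    """Split [min_key, max_key] into roughly equal, contiguous ranges."""
    total = max_key - min_key + 1
    base, extra = divmod(total, partitions)
    ranges, start = [], min_key
    for i in range(partitions):
        size = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# Default of 3 partitions, as with ODP-based replication:
print(partition_key_range(1, 10, 3))  # → [(1, 4), (5, 7), (8, 10)]
```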
Parallel processing :
A replication flow can contain a maximum of 500 replication objects.
By default, a replication flow can utilize up to two replication flow jobs.
A replication flow job can process up to five work orders for data transfer and five work orders for housekeeping. (Work orders are internal objects that are needed to organize the work efficiently.)
If a replication object is divided into partitions, each partition is processed by one work order.
Each data transfer work order requires a thread, that is, a technical connection to both the source and the target.
Each replication flow counts towards the total possible maximum of 10 parallel jobs in one tenant.
If you have 10 replication flow jobs, you can process a total possible maximum of 50 work orders for data transfer and 50 work orders for housekeeping. With one thread per work order, the total maximum for threads is 100 for the source and for the target, respectively. (You can still have more than 10 replication flows, but not more than 10 replication flows can run in parallel at any given point in time.)
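The arithmetic above can be sketched as a back-of-the-envelope calculation (an illustration of the documented defaults, not an SAP API; the function and constant names are my own):

```python
# Back-of-the-envelope sketch (not an SAP API) of the parallelism limits
# described above, using the documented defaults.

MAX_PARALLEL_JOBS = 10           # tenant-wide cap on parallel flow jobs
TRANSFER_ORDERS_PER_JOB = 5      # data transfer work orders per job
HOUSEKEEPING_ORDERS_PER_JOB = 5  # housekeeping work orders per job

def capacity(requested_jobs: int) -> dict:
    """Maximum work orders and threads for a given number of flow jobs."""
    jobs = min(requested_jobs, MAX_PARALLEL_JOBS)  # cannot exceed the cap
    transfer = jobs * TRANSFER_ORDERS_PER_JOB
    housekeeping = jobs * HOUSEKEEPING_ORDERS_PER_JOB
    # One thread per work order, on the source side and on the target side.
    return {
        "transfer_work_orders": transfer,
        "housekeeping_work_orders": housekeeping,
        "threads_per_side": transfer + housekeeping,
    }

print(capacity(10))
# → {'transfer_work_orders': 50, 'housekeeping_work_orders': 50, 'threads_per_side': 100}
```

Note how requesting more than 10 jobs changes nothing: the tenant-wide cap keeps the thread totals at 100 per side.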
Sizing for Delta Loading
For delta loading, the default is that each object is processed as one partition by one data transfer work order, so a total of 5 data transfer work orders and one replication flow job are utilized.
The main intention of this blog is to clarify in which scenarios to opt for Replication Flow versus Remote Table Replication. Although they are similar in nature, their usage differs.
Feel free to add any thoughts or points I missed.
Best Regards,
Kartheek Kota