Handling large data sets efficiently is a common challenge in integration scenarios. In SAP Cloud Integration (SAP CI), pagination plays a vital role in retrieving data in manageable chunks, ensuring seamless performance and data consistency. In this blog series, I will explore how to implement pagination across four widely-used adapters: Salesforce, MS Dynamics, Workday, and Coupa.
We’ll begin with Salesforce, diving into practical examples and best practices that streamline data synchronization. Whether you’re dealing with customer records, transactional data, or employee information, mastering pagination will enhance your integration flows. Stay tuned as we navigate through each adapter, tackling real-world challenges and solutions.
Salesforce Adapter – Handling Large Data Sets with Custom Pagination
In this use case, we focus on fetching records from the Account entity in Salesforce, which contains 24k+ records. To achieve this, we use Salesforce Object Query Language (SOQL) to retrieve data efficiently. While SAP CI provides an Auto Pagination feature that retrieves all pages in a single call, this approach can lead to significant performance issues when dealing with large datasets.
To optimize performance and ensure seamless data retrieval, a custom pagination strategy is the recommended approach. By fetching records page by page in batch mode, we can prevent system overload, improve response times, and maintain control over data flow. In this blog, we will explore how to implement custom pagination in SAP CI for the Salesforce adapter, ensuring scalability and efficiency in your integrations.
Design Approach:
The scenario involves sending Account entity data from Salesforce to a target system that has a batch limit of 2,000 records per request. To accommodate this constraint, we implement a custom pagination approach using two Salesforce receiver adapter calls.
The first call fetches the initial page of records, while the second call operates within a looping process to retrieve subsequent pages. Each fetched page is passed through message mapping, ensuring seamless transformation before being delivered to the target system. This cycle continues—fetching a page, processing it, and sending it to the target—until all records are transferred, ensuring efficient end-to-end batch processing and a controlled data flow from Salesforce to the target system.
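Before walking through the individual steps, here is a conceptual Groovy sketch of the loop, using simulated pages and hypothetical helper closures (fetchFirstPage, fetchNextPage, and sendToTarget stand in for the iFlow steps; this is an illustration only, not code you deploy):

```groovy
// Conceptual sketch only: the real iFlow models this loop with a Looping Process Call
// and a second Salesforce Request Reply, not with script code.
def pages = [
    [records: (1..2000).toList(),    done: false, nextRecordsUrl: '/query/more-1'],
    [records: (2001..2446).toList(), done: true,  nextRecordsUrl: null]
]
int call = 0
def fetchFirstPage = { -> pages[call++] }                       // first Salesforce receiver call
def fetchNextPage  = { String url -> pages[call++] }            // looped call via <nextRecordsUrl>
def sendToTarget   = { List records -> println "Delivered ${records.size()} records" }

def page = fetchFirstPage()
while (true) {
    sendToTarget(page.records)                 // message mapping + delivery to the target system
    if (page.done) break                       // <done>true</done> -> last page reached
    page = fetchNextPage(page.nextRecordsUrl)  // fetch the next batch of records
}
```

In the actual integration flow, the while loop corresponds to the Looping Process Call described below, and the break condition corresponds to the LoadDone property derived from the <done> flag.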
Design components elaboration:
Timer : This is a timer-based flow that can be scheduled as per the user’s input.
Content Modifier : “set properties” – This step accepts inputs such as EntityName, fieldnames, and StartDate from the user; all of these fields are externalized to keep the flow dynamic.
Process Call : “Initiate Salesforce Call” – Once all the values are set, this process call triggers the Salesforce call.
Request Reply : “Salesforce Receiver Adapter” – This step initiates the first call to Salesforce to fetch the first page using a SOQL query.
We will not dig into the connection tab in this blog; the focus here is purely on the pagination concept.
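For orientation, the query executed by this first call can be pictured as being assembled from the externalized properties set in the “set properties” step. The following Groovy Script step is only a sketch under assumptions: the soqlQuery property name and the LastModifiedDate filter are illustrative, and in the flow described here the query is maintained in the Salesforce receiver adapter itself.

```groovy
// Sketch only: build a SOQL query from the externalized properties set in "set properties".
// 'soqlQuery' is a hypothetical property name; the WHERE clause is an assumption.
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def props      = message.getProperties()
    def entityName = props.get('EntityName')   // e.g. "Account"
    def fieldNames = props.get('fieldnames')   // e.g. "Id, Name, LastModifiedDate"
    def startDate  = props.get('StartDate')    // e.g. "2024-01-01T00:00:00Z"

    def soql = "SELECT ${fieldNames} FROM ${entityName} WHERE LastModifiedDate >= ${startDate}"
    message.setProperty('soqlQuery', soql)
    return message
}
```

Whichever way the query is maintained, the key point for pagination is that this first call returns only the first page of up to 2,000 records.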
Output:
In the response, we retrieve 2,000 records from the initial Salesforce call. The total record count is 24,446, and the response contains a <done>false</done> flag, indicating that more pages are available. Additionally, a <nextRecordsUrl> tag is provided, which serves as a reference for fetching the next batch of records.
To implement pagination, we leverage these two key fields—<done> and <nextRecordsUrl>—to construct the logic for fetching subsequent pages. As long as <done> remains false, the process continues, dynamically fetching the next set of records until all data is retrieved and processed efficiently.
Router : “Data?” –
A router is used to determine whether the fetched page contains any records. If records are present, the integration flow proceeds to the next steps. However, if the page is empty, the flow terminates immediately, preventing unnecessary processing.
This check ensures optimized execution by stopping the pagination loop when there are no more records to fetch, improving efficiency and reducing unnecessary API calls.
Content Modifier : “Read nextURL” – This is a crucial step where we read the two key fields, <done> and <nextRecordsUrl>, using XPath expressions and store them in the properties LoadDone and nextRecordsURL.
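If you prefer a Script step over the Content Modifier, the same extraction can be done in Groovy. This is a minimal sketch, assuming <done> and <nextRecordsUrl> appear somewhere in the response payload; align the lookup with the exact structure your adapter version returns:

```groovy
// Sketch only: read <done> and <nextRecordsUrl> from the Salesforce query response and
// store them in the properties LoadDone and nextRecordsURL (the names used in this blog).
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def root = new XmlSlurper().parseText(message.getBody(String))

    def done    = root.'**'.find { it.name() == 'done' }?.text() ?: 'true'
    def nextUrl = root.'**'.find { it.name() == 'nextRecordsUrl' }?.text() ?: ''

    // On the final page <nextRecordsUrl> is absent, so nextUrl stays empty and done is "true",
    // which later stops the looping process call.
    message.setProperty('LoadDone', done)
    message.setProperty('nextRecordsURL', nextUrl)
    return message
}
```

Functionally this matches the Content Modifier with two XPath expressions; it is shown only as an equivalent alternative and to make the last-page behavior explicit.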
Process Call : “Message Mapping” – After this step, the page goes through message mapping and is delivered to the target. This part is target-specific and is not covered in this blog.
Router : “next batch ?” –
Identifying Next Page for Pagination
Once the first batch is successfully processed and sent to the target system, the design ensures that the integration flow moves to the next page. To achieve this, a router is used to check whether another page is available.
This check is performed using the “LoadDone” property, which was set in the previous content modifier. If more pages exist, the flow loops to fetch the next batch using the <nextRecordsUrl>. If no further pages are available, the process terminates, ensuring efficient data retrieval without unnecessary API calls.
Properties generated at runtime for this run:
Looping Process Call : “Call next batch”
Looping Mechanism: Fetching next Pages
Once it is confirmed that more pages are available, the flow enters a looping process call to retrieve subsequent pages. This looping mechanism operates based on the condition:
${property.LoadDone} = "false"
As long as LoadDone remains "false", the process continues fetching the next batch of records using the <nextRecordsUrl>. This ensures that each page is processed and sent to the target system sequentially.
Once all pages have been retrieved and processed, LoadDone is set to "true", signaling that no further pages exist. At this point, the loop terminates, marking the successful completion of data transfer.
This looping process call contains the second Request Reply step with the Salesforce adapter, which fetches the subsequent pages.
For this call, we use the operation SOQL – Execute Query for More Results, and the nextRecordsURL property set in the previous content modifier is passed dynamically to the adapter’s “Next Records URL” field (for example, via the expression ${property.nextRecordsURL}, assuming that field accepts a property expression).
Output:
Page 2: This page also contains 2,000 records, and done = false, which means more pages are available.
Final Page Handling: Completing the Pagination Process
For a total of 24,446 records, the pagination process spans 13 pages:
The first 12 pages contain 2,000 records each.
Page 13 contains the remaining 446 records.
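As a quick sanity check of that page math, in plain Groovy:

```groovy
// Page-count check for 24,446 records with 2,000 records per page
int total     = 24446
int pageSize  = 2000
int fullPages = total.intdiv(pageSize)              // 12 full pages = 24,000 records
int remainder = total % pageSize                    // 446 records left for the final page
int pages     = fullPages + (remainder > 0 ? 1 : 0) // 13 pages in total
println "pages=${pages}, lastPageSize=${remainder}"
```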
When the 13th page is fetched, the response contains:
done = true, indicating that this is the last batch.
No <nextRecordsUrl>, confirming that no more pages are available.
At this point, the looping process terminates, ensuring that all records have been successfully retrieved and processed.
Page 13 Output:
The same Content Modifier : “Read nextURL” is used after this step to save the key properties LoadDone and nextRecordsURL.
The same Process Call : “Message Mapping” is used here to send this page through the message mapping and on to the Send to Target local integration process (LIP).
Full Run in TRACE:
Conclusion
Implementing custom pagination in SAP CI for the Salesforce adapter ensures efficient data retrieval while preventing performance bottlenecks. By leveraging looping mechanisms, routers, and conditional checks, we can seamlessly fetch and process large datasets in manageable batches. This approach not only optimizes API calls but also ensures smooth end-to-end data transfer. Stay tuned for the next part of this series, where we explore pagination for the remaining adapters.
I hope this helps.
Cheers,
Punith Oswal