The Problem: A Silent Database Killer
During a routine database health check, we discovered something alarming — the ARFCSDATA table had grown to a size that was consuming a significant portion of our SAP system’s database. The total entry count had ballooned to over 847 million rows, with a staggering 827 million of those entries belonging to a single RFC destination: <BWSID>CLNT100.
This kind of uncontrolled table growth is one of those issues that develops silently over months, often going unnoticed until it starts impacting system performance or storage costs.
Understanding ARFCSDATA and ARFCSSTATE
Before diving into the root cause, it helps to understand what these tables do:
- ARFCSSTATE — Stores the state/header records for transactional RFC (tRFC) and queued RFC (qRFC) calls. Each entry represents a Logical Unit of Work (LUW) identified by a Transaction ID (TID).
- ARFCSDATA — Stores the actual payload data for each tRFC/qRFC LUW. A single TID in ARFCSSTATE can correspond to multiple entries in ARFCSDATA, depending on the number of function modules called within that LUW.
Under normal conditions, these entries are created, processed, and then cleaned up automatically. When cleanup fails — typically because the target system never acknowledges receipt — the data accumulates indefinitely.
Root Cause Analysis: The SQL Investigation
To find out what was causing the bloat, we ran a diagnostic query joining ARFCSSTATE and ARFCSDATA:
SELECT a.arfcipid, a.arfcpid, a.arfctime, a.arfctidcnt, COUNT(a.arfcipid)
FROM arfcsstate AS a
LEFT JOIN arfcsdata AS b
  ON  a.arfcipid   = b.arfcipid
  AND a.arfcpid    = b.arfcpid
  AND a.arfctime   = b.arfctime
  AND a.arfctidcnt = b.arfctidcnt
GROUP BY a.arfcipid, a.arfcpid, a.arfctime, a.arfctidcnt
ORDER BY COUNT(a.arfcipid) DESC

(Reference: SAP Note 2899366.)
Query Result:
The results were eye-opening. The top single TID entry had over 3.6 million corresponding rows in ARFCSDATA. The top 30 entries alone accounted for hundreds of millions of rows, with most individual TIDs carrying between 3.1 million and 3.7 million data entries each.
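The grouping logic behind the diagnostic query can be sanity-checked outside SAP. Below is a minimal sketch using SQLite: the table and column names mirror the SAP originals, but the rows are invented purely to illustrate how a stuck TID with many payload entries floats to the top of the result.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Simplified stand-ins for the SAP tables: ARFCSSTATE holds one header
# row per TID; ARFCSDATA holds N payload rows for the same TID key.
cur.execute("CREATE TABLE arfcsstate (arfcipid, arfcpid, arfctime, arfctidcnt)")
cur.execute("CREATE TABLE arfcsdata  (arfcipid, arfcpid, arfctime, arfctidcnt)")

# Two mock TIDs: one with 5 payload rows, one with 2.
cur.execute("INSERT INTO arfcsstate VALUES ('IP1', 'P1', 'T1', '0001')")
cur.execute("INSERT INTO arfcsstate VALUES ('IP2', 'P2', 'T2', '0002')")
cur.executemany("INSERT INTO arfcsdata VALUES (?, ?, ?, ?)",
                [('IP1', 'P1', 'T1', '0001')] * 5 +
                [('IP2', 'P2', 'T2', '0002')] * 2)

# Same shape as the diagnostic query: payload rows per TID, largest first.
# (Counting b's column rather than a's so an unmatched header counts as 0.)
rows = cur.execute("""
    SELECT a.arfcipid, a.arfcpid, a.arfctime, a.arfctidcnt,
           COUNT(b.arfcipid) AS payload_rows
    FROM arfcsstate AS a
    LEFT JOIN arfcsdata AS b
      ON  a.arfcipid   = b.arfcipid
      AND a.arfcpid    = b.arfcpid
      AND a.arfctime   = b.arfctime
      AND a.arfctidcnt = b.arfctidcnt
    GROUP BY a.arfcipid, a.arfcpid, a.arfctime, a.arfctidcnt
    ORDER BY payload_rows DESC
""").fetchall()

for r in rows:
    print(r)  # the TID with the most payload rows surfaces at the top
```

In the real incident, the same ordering put the 3.6-million-row TID in first place.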
We extracted the top 350 TIDs from ARFCSSTATE for further analysis. The findings were consistent: all these entries pointed to the BW (Business Warehouse) system destination, and the username associated with them was used for data extraction processes.
This pointed directly to a BW extraction scenario where tRFC calls from the source system (ECC) to the BW system were not being acknowledged and cleaned up, causing both the state and data tables to fill up with stale records.
SM58: The Smoking Gun
A check of SM58 (tRFC Monitor) on the ECC system confirmed the diagnosis. We found approximately 2,200 stuck tRFC entries, accumulated over an extended period:
- Some were in Executing status (stuck mid-process)
- Some were in Failed status (target system not reachable or rejecting calls)
- As of 15th March 2026, there were 1,874 stuck entries for the BW system destination alone
These stuck LUWs explained everything. Each failed or stuck tRFC entry retained its payload in ARFCSDATA indefinitely, and as BW extractions ran repeatedly over time, the number of stranded entries multiplied into the hundreds of millions.
The Fix: Scheduling RSARFCER
The resolution was to schedule the standard SAP report RSARFCER in the ECC system as a background job.
RSARFCER is SAP’s built-in cleanup program for tRFC/qRFC data. It identifies and removes LUW entries that have been successfully processed or that have exceeded the configured retention period. Running this regularly is a recommended best practice but is often overlooked in busy SAP landscapes.
Tip: Schedule RSARFCER as a periodic batch job (daily or weekly depending on your RFC volume) to prevent ARFCSDATA from growing out of control.
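RSARFCER's effect can be pictured in relational terms: remove the payload rows and then the header rows of LUWs whose status marks them as finished, leaving unresolved LUWs untouched. A loose SQLite sketch of that idea follows; the status column values here are invented for illustration, and the real report additionally handles locking, qRFC queues, selection criteria, and retention logic.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE arfcsstate (arfcipid, arfcpid, arfctime, arfctidcnt, arfcstate)")
cur.execute("CREATE TABLE arfcsdata  (arfcipid, arfcpid, arfctime, arfctidcnt)")

# Two mock LUWs: one confirmed (safe to clean), one still unresolved.
cur.execute("INSERT INTO arfcsstate VALUES ('IP1','P1','T1','0001','CONFIRMED')")
cur.execute("INSERT INTO arfcsstate VALUES ('IP2','P2','T2','0002','EXECUTING')")
cur.executemany("INSERT INTO arfcsdata VALUES (?,?,?,?)",
                [('IP1','P1','T1','0001')] * 3 +
                [('IP2','P2','T2','0002')] * 2)

# Payload rows of finished LUWs go first, then their header rows.
cur.execute("""
    DELETE FROM arfcsdata
    WHERE (arfcipid, arfcpid, arfctime, arfctidcnt) IN
          (SELECT arfcipid, arfcpid, arfctime, arfctidcnt
           FROM arfcsstate WHERE arfcstate = 'CONFIRMED')
""")
cur.execute("DELETE FROM arfcsstate WHERE arfcstate = 'CONFIRMED'")

remaining = cur.execute("SELECT COUNT(*) FROM arfcsdata").fetchone()[0]
print(remaining)  # only the unresolved LUW's payload rows survive
```

This is also why fixing SM58 first matters: cleanup programs cannot touch LUWs that are still stuck in an unresolved state.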
The Results: 1.7 TB → 310 GB
The impact of the cleanup was dramatic:
Metric               Before Cleanup   After Cleanup
Total DB Size        ~1.7 TB          310 GB
ARFCSDATA Entries    847,626,718      144,051,765
BW System Entries    827,901,049      Significantly reduced
The table size shrank by over 80%, freeing up more than 1.4 TB of storage — all from addressing a single table’s accumulated stale data.
Key Takeaways
1. Monitor ARFCSDATA size proactively. Add ARFCSDATA (and ARFCSSTATE) to your regular table size monitoring. A sudden or sustained growth trend is an early warning sign of stuck tRFCs.
2. Check SM58 regularly. Stuck entries in SM58 are not just a functional problem — they are a direct driver of database bloat. Investigate and resolve them as part of your regular Basis housekeeping.
3. Schedule RSARFCER as a recurring job. This is the primary housekeeping tool for tRFC/qRFC data. It should be running on a schedule in every SAP system that uses RFC-based integrations — especially source systems feeding BW or other downstream platforms.
4. Pay attention to BW extraction RFC destinations. High-frequency BW data extractions are a common source of tRFC accumulation, particularly if the BW system experiences downtime or connectivity issues during extraction windows.
5. Investigate TID-level anomalies. If a single TID is generating millions of ARFCSDATA entries, something is fundamentally wrong with that LUW — either the extraction program is looping, or there is a data design issue creating an unusually high number of RFC calls within a single transaction.
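The proactive check behind takeaways 1 and 2 boils down to a per-destination count over ARFCSSTATE (ARFCDEST is the destination field in the real table). A mock SQLite version of that monitoring query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE arfcsstate (arfcdest, arfcstate)")

# Invented sample data: one destination with a clear backlog.
cur.executemany("INSERT INTO arfcsstate VALUES (?, ?)",
                [("BWSIDCLNT100", "RECORDED")] * 4 +
                [("OTHERDEST", "RECORDED")])

# Outstanding LUWs per destination, largest backlog first — the same
# picture a quick SE16 count or an SM58 review would give.
top = cur.execute("""
    SELECT arfcdest, COUNT(*) AS n
    FROM arfcsstate
    GROUP BY arfcdest
    ORDER BY n DESC
""").fetchall()
print(top[0])
```

Trending this count over time (e.g. from a daily job) turns sudden growth into an alert rather than a surprise during a health check.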
Final Thought
ARFCSDATA bloat is one of the more underappreciated causes of unexpected database growth in SAP systems. It doesn’t cause obvious application errors — it just quietly consumes space until it becomes a serious infrastructure problem. With the right monitoring and a simple scheduled job, it’s entirely preventable.
If your ARFCSDATA table is growing unchecked, start with SM58 and RSARFCER — chances are, that’s where the story begins and ends.