SAP Analytics Cloud (SAC) offers powerful capabilities for planning, analytics, and business intelligence—all in one unified platform. But to truly unlock its potential, it’s essential to follow best practices that ensure optimal performance, scalability, and maintainability. In this blog, we’ll explore some of the insights and recommendations across four key pillars of SAC:
Analytics & Planning Modeling: Structuring your models for flexibility, performance, and future growth.
Calculations: Designing efficient formulas and leveraging SAC’s calculation engine wisely.
Data Actions: Automating planning workflows with precision and minimal overhead.
Story Building: Crafting intuitive, responsive, and insightful dashboards that drive decision-making.
Performance Considerations
Performance in SAC isn’t a post-implementation concern—it must be a core priority from the very beginning of your project. Striking the right balance between rich functionality and responsive performance is essential. While functional requirements drive business value, they must be matched by a system that performs reliably and efficiently. Performance optimization touches every layer of your SAC solution: from backend data architecture and model design to story layout, user concurrency, and even device and network configurations.
Model Design
Model Scope: Clearly define the model’s scope, type (planning or analytics), required metrics, and data granularity based on business objectives. This clarity prevents overengineering and ensures alignment with user needs.
Reusability: Build modular models that can be reused across multiple stories and planning scenarios. Reusability reduces duplication, simplifies maintenance, and accelerates development cycles.
Lean Model Structure: Key performance drivers include the number of measures, dimensions, hierarchies, and members per dimension. Reducing unnecessary dimensions and avoiding overly complex hierarchies directly improves query performance and system responsiveness. Focus on dimensions that are essential for data collection. Attributes such as geographic area or gender can often be derived rather than modeled as standalone dimensions. Evaluate whether static dimensions can be converted into attributes, and consider splitting large models into smaller, purpose-driven ones. Simpler models are easier to maintain, faster to calculate, and more memory efficient. Avoid adding dimensions or calculations unless they deliver clear business value.
Understanding Model Types: From a performance perspective, Analytic Models connected to live data sources place minimal demand on SAC infrastructure, as most of the processing load is handled by the source system itself. In contrast, Analytic Models using acquired data rely on the embedded HANA database, primarily for read operations. While this does introduce some system load, it tends to be moderate and more CPU-intensive than memory-intensive. Planning Models with acquired data, however, offer richer functionality but come with higher resource requirements—impacting both CPU and memory.
Their performance is heavily influenced by model design factors such as the number of exception aggregations, the complexity and volume of dimensions, and the breadth of the user’s security context during data entry operations. Additional performance-critical factors, such as the number of concurrent users, the volume and structure of the data, and the frequency and complexity of scheduled jobs or data actions, also play a significant role in determining system responsiveness and scalability.
Use Measure-Based Models: When designing models in SAC, opting for a measure-based model is highly recommended due to its significant advantages in performance, calculation flexibility, and overall modeling efficiency.
Enhanced Performance & Scalability: Measure-based models are optimized for faster processing and reduced complexity, especially in large-scale planning scenarios.
Flexible Structure: These models support both accounts and measures, with the account dimension now optional, giving you greater design freedom.
Reusable Calculations: You can define base measure calculations that simplify logic reuse across stories and planning workflows.
Advanced Aggregation: Measures can be aggregated more efficiently, improving responsiveness and reducing load times.
Expanded Data Type Support: Measure-based models accommodate a wider range of data types, enhancing versatility.
Multi-Currency Planning: With base currency measures, currency conversion becomes seamless, enabling planning across multiple currencies with ease.
Dimension Design
Leverage Public Dimensions: Use the Public Dimension feature to manage master data independently, allowing for centralized updates and reuse across models. Avoid importing all source system tables—focus only on dimensions relevant to data entry and planning.
Dimension Hierarchies and Members: Deep hierarchies can be resource intensive. Limit the number of levels to reduce complexity and improve query performance. Keep the number of members in each dimension—including the Account dimension—lean. Import only valid members and apply filters to exclude irrelevant data.
Date Dimension: Use SAC’s standard date dimension with SAP-managed hierarchies (Year–Quarter–Month–Day) to minimize performance impact.
Planning Area
The planning area defines the specific slice of data within a model that end users can interact with—whether through stories or data actions. When users plan on a public version in a story, SAC automatically creates an implicit private version behind the scenes. This version isn’t visible to the user but introduces system overhead, especially in models with large datasets or when multiple users are editing simultaneously. To mitigate this, it’s crucial to explicitly define the planning area in the model settings and enable it within the story. Doing so significantly reduces resource consumption during public version edits. Key recommendations include:
Configure a Targeted Planning Area: Set a focused scope based on business needs, using data access control, data locking, or both to restrict editable data.
Avoid Full Dataset Scope: Limiting the planning area prevents unnecessary data loading and improves performance.
Use Data Access Control (DAC): Define which users can write to specific dimensions—such as company codes or profit centers—to ensure secure and relevant access.
Apply Data Locking: Protect actuals from being overwritten by locking them from edits.
Enable Dynamic Planning Area: Set “Auto-generate based on table context” as the default. This allows the planning area to adjust automatically based on filters, story context, or table layout—ensuring users only load what they need.
SAC Planning Best Practices
Use Unbooked Data Selectively: Enable unbooked data only when necessary—typically for planning use cases. Since it triggers resource-intensive backend queries, it’s best disabled for reporting scenarios to preserve performance.
Implement DAC: Always activate DAC on relevant dimensions to enforce row-level security. This ensures users only see and interact with data within their authorized scope, reducing system load during public version edits. DAC should be configured during the model design phase.
Combine DAC with Role-Based Restrictions: DAC can be turned on in the model dimension through the Read/Write property, or via the Data Access Filter in a role by turning Model Data Privacy on, so DAC can be combined with role-based restrictions. Use DAC primarily on planning-relevant dimensions (e.g., Cost Center, Profit Center, Region), not on all dimensions.
Complement with Data Locks: While DAC governs who can read/write data, Data Locks control when data can be modified (e.g., freezing approved budgets). This combination enhances performance, especially in multi-user environments. Minimize driving dimensions when using data locking to reduce overhead.
Define Validation Rules: Ensure users can only edit valid combinations of dimension members. This is particularly important for models with large dimension tables and helps maintain data integrity while improving performance.
Limit Private Versions: Excessive private versions—especially when multiple users are editing simultaneously—can strain system resources. Regularly clean up unused private versions to maintain system efficiency.
Manage Date Granularity: For models with day-level granularity, consider isolating them into separate models. Use cross-model copy data actions to transfer aggregated data, reducing complexity and improving performance.
Use the Modeler Option ‘Optimize Story Building Performance’: This prevents the automatic refresh of data during story design.
When importing or writing back datasets, use filters or pre-aggregation to reduce data volume.
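As an illustration of restricting scope before reading or writing data, an advanced-formulas data action can declare its slice up front with MEMBERSET so only the needed cells are touched. The dimension and member names below (Date, CostCenter, version names) are hypothetical—a minimal sketch, not a definitive implementation:

```
// Hypothetical dimensions and members, for illustration only.
// Restrict the scope first: twelve 2025 periods, two cost centers, the Plan version.
MEMBERSET [d/Date] = "202501" TO "202512"
MEMBERSET [d/CostCenter] = ("CC1000", "CC2000")
MEMBERSET [d/Version] = "public.Plan"

// Seed the plan from the prior year's actuals (12 periods back),
// touching only the slice declared above.
DATA() = RESULTLOOKUP([d/Version] = "public.Actual", [d/Date] = PREVIOUS(12))
```

Because every statement after the MEMBERSET declarations operates only on the declared slice, the same copy logic reads and writes far less data than an unscoped version would.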
Calculations/Data Actions
Prioritize Model-Level Formulas: Define calculations within the modeler rather than at the story level to reduce query load and improve responsiveness.
Offload Complex Logic to Data Actions: Use data actions for advanced formulas and heavy computations instead of embedding them directly in stories.
Restrict Execution Scope: Use the MEMBERSET statement to narrow the scope of data actions. A tightly defined scope minimizes processing time and resource usage.
Limit Iteration Range: Keep calculation loops (IF, FOREACH) as focused as possible. Broad iteration across dimensions can degrade performance.
Aggregate at the Highest Logical Level: Design calculations to occur at the most aggregated node feasible to reduce data volume and complexity.
Avoid Copying Calculated Accounts: Copying calculated accounts or non-standard aggregations is resource intensive. Prefer copying raw fact data when possible.
Configure Advanced Formulas Thoughtfully: All steps in advanced formulas execute sequentially and materialize in the model. Use variable members to streamline logic and avoid unnecessary commits.
Minimize Query Calls: Reduce the number of DATA() and RESULTLOOKUP() statements—each call triggers a backend query.
Prefer Restricted Measures Over Exception Aggregations: Exception aggregations (e.g., MIN, MAX, AVERAGE across hierarchies) are computationally expensive. Use restricted measures and advanced formulas when possible.
Simplify Formula Structures: Avoid breaking formulas into too many sub-steps, which increases processing overhead.
Recompile After Model Changes: If performance drops after model updates, re-save the advanced formula or data action to trigger recompilation and optimization.
Centralize Execution for Large Volumes: Run data actions centrally via an admin user when handling large datasets. For decentralized execution, ensure scope is tightly defined.
Trigger Actions Judiciously: Avoid frequent or unnecessary data action runs. Execute only when required to conserve resources.
Monitor Execution Time: Use the Data Action Monitor to track duration and identify slow-running actions.
Use the Job Monitor for Diagnostics: Regularly review run history and step-level details to pinpoint bottlenecks and optimize performance.
Simplify IF Statements: Reduce reliance on IF statements by using logical operators like AND or applying filters. Each IF creates a subquery, so use them sparingly.
Leverage Predictive Planning: Use predictive features to generate accurate forecasts based on historical data, accelerating planning cycles and improving decision-making.
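To make the variable-member and query-call points concrete, the sketch below (account and version names are hypothetical) caches a repeated lookup in a variable member so the underlying value is read once rather than once per downstream formula, and relies on MEMBERSET for scoping instead of wrapping the logic in an IF:

```
// Hypothetical account and version names, for illustration only.
MEMBERSET [d/Version] = "public.Plan"

// A variable member holds an intermediate result without committing it to the model.
VARIABLEMEMBER #REV OF [d/Account]
DATA([d/Account] = #REV) = RESULTLOOKUP([d/Account] = "REVENUE")

// Reuse the cached value in several downstream calculations
// instead of repeating the same expensive lookup.
DATA([d/Account] = "TRAVEL_EXP") = RESULTLOOKUP([d/Account] = #REV) * 0.05
DATA([d/Account] = "MARKETING") = RESULTLOOKUP([d/Account] = #REV) * 0.10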
Backend HANA/BW considerations
HANA modelling
Optimize HANA views by following standard guidelines for modelling and performance tuning:
Perform Calculations Post-Aggregation: Whenever possible, structure views to calculate after aggregation to reduce data volume and improve speed.
Join on Indexed Columns: Use keys or indexed columns for joins instead of calculated fields to enhance query performance.
Push Logic to HANA: Keep calculations and logic within HANA to leverage pushdown capabilities and minimize SAC-side processing.
Blend in HANA, Not SAC: Perform data linking and blending operations in HANA to reduce complexity and improve responsiveness.
Apply Filters Early: Early filtering reduces data load and improves query execution time.
Hide Unused Dimensions in SAC: Remove unnecessary metadata from SAC models to streamline performance and reduce clutter.
BW modelling
Design Efficient Queries: Limit the number of dimensions and hide unnecessary key figures to reduce query payload and improve responsiveness.
Use BW Variables Over SAC Filters: BW variables are processed server-side and offer better performance than applying filters within SAC stories.
Define Restricted Key Figures in BW: Avoid creating restricted key figures in SAC remote models. Instead, configure them directly in BW for optimized execution.
Minimize Cascading Page Filters: Page-level filters that trigger cascading effects across widgets can slow down performance. Use them sparingly and strategically.
Enable Query Merge: Activate query merge to consolidate backend calls, reducing latency and improving load times.
Enable HTTP/2 Protocol: Improves data transfer efficiency between SAC and BW, especially for large datasets.
Increase Parallel Sessions: Configure BW data sources to support more parallel sessions, enhancing concurrency and user experience.
Apply BI Authorizations for Data Security: Use BW’s built-in data-level security to control access and ensure users only retrieve data relevant to their roles.
Story Building Recommendations
Story Structure & Layout
Use the Optimized or Unified Story Experience: Adopt the latest story building interface for enhanced performance, flexibility, and design capabilities.
Leverage Composites as Reusable Widgets: Composites are stored as SAC artifacts and can be reused across stories. They support both canvas and responsive layouts and allow story-level features like filters and bookmarks to apply seamlessly. Benefits include faster story creation, reduced maintenance, standardized design, and support for parallel development.
Use Responsive Layouts: Responsive pages automatically adjust to screen size and load faster. Unified stories offer advanced responsive rule configurations for precise control.
Content Design & Performance Optimization
Request Only What’s Needed: Break down information into manageable layers. Use summary or landing pages with hyperlinks to detailed views.
Minimize Page Count and Widget Load: Reduce the number of pages and limit data-heavy widgets per page to improve load times and user experience.
Optimize Charts and Tables: Avoid charts with more than 500 data points. Limit the number of charts per page to reduce backend requests. Split large tables into smaller ones with fewer KPIs or measures. Limit the number of cells and descriptive columns for readability. Avoid excessive formatting rules and in-cell charts and calculations.
Use Lightweight Visuals: Replace large image files with lightweight SVGs to reduce rendering time.
Filtering & Data Handling
Implement Filters Strategically: Apply filters at the document level rather than creating generic views. Use page filters instead of individual chart filters. Avoid loading all data and filtering afterward.
Limit Hierarchy Depth in Filters: Deep hierarchies can slow down performance—use only necessary levels.
Minimize Data Blending Sources: Keep the number of blended data sources low to reduce complexity and improve responsiveness.
Pause Data Refresh During Design: Use the “Pause Data Refresh” option to optimize story loading while building.
Security & Access Control
Test with High-Access Roles: Validate story performance using roles with broad authorizations—not just the admin role—to simulate real-world usage.
Apply Data Access Control (DAC): Ensure users only see data relevant to their role. For example, use filters for company code and profit center rather than displaying all dimensions.
Planning-Specific Tips
Use Fluid Data Entry Mode: This default mode offers better performance and interactivity. Avoid Single Data Entry mode, which is slower.
Enable Planning Only Where Needed: Activate planning features on tables only when required. Disable planning in the builder panel if not used.
Latest Performance Improvements
New Table Build Experience: Edit Mode enhancements in optimized stories.
Blended Tables: Initial page loading is faster.
Calculations: Automatically retained when tables are copied.
Improved Filtering & Sorting: More efficient workflows and reduced query errors.
Ghost Table: Instant layout preview before data loads.
Story Design Optimization: Performance recommendations in Edit Mode are available for story designers in the optimized story experience, at the widget, page, and story levels.
Preload Pages: Can be enabled in Edit Mode under Data Refresh > Loading Optimization Settings > Preload Pages. Preloading initializes parts of the next and previous pages, so switching to those pages is faster.
If performance issues persist despite optimization efforts, SAC offers a suite of diagnostic tools to help pinpoint the root cause. These tools provide visibility into query execution, data action performance, and story load behavior. For detailed guidance on using these diagnostics, refer to the official SAP Help documentation or reach out to SAP Support for expert help.