HANA Database Migration: Handling Incompatible Changes to Existing Tables

Introduction

When developing applications on SAP HANA, one of the most challenging aspects is managing database schema changes over time, especially when dealing with production systems containing live data. How do you restructure your tables without losing existing data? How do you handle incompatible changes that might otherwise require dropping and recreating tables?

In this blog post, we’ll explore a powerful but sometimes overlooked feature in SAP HANA development: Migration Tables (.hdbmigrationtable). We’ll examine how they provide an elegant solution to a common problem, using a real-world use case from a coupon management application.

Understanding Migration Tables in SAP HANA

Migration Tables (.hdbmigrationtable) are a specialized artifact type in SAP HANA development designed specifically for handling data migration during application upgrades. They offer a controlled way to transform your database schema while preserving existing data.

These artifacts become particularly valuable when you need to:

- Split tables
- Merge tables
- Change column types or names
- Add or remove primary keys
- Convert non-nullable columns to nullable (or vice versa)

In essence, they provide a mechanism to script complex migrations that would otherwise be impossible through standard deployment processes.
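To make the mechanics concrete, here is a minimal sketch of such an artifact for a purely hypothetical table; in a CAP project these files are generated and updated by the CDS build rather than written from scratch:

== version=2
COLUMN TABLE MY_EXAMPLE (
  ID   NVARCHAR(36) NOT NULL,
  name NVARCHAR(100),
  PRIMARY KEY(ID)
)
== migration=2
-- statements HDI executes to bring an existing version 1 table to version 2
ALTER TABLE MY_EXAMPLE ADD (name NVARCHAR(100) NULL);

The == version header describes the target state of the table, while each == migration block lists the SQL that HDI runs to move an already deployed table from the previous version to that state.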

The Problem: Adding Draft Support to a Localized Entity

Let’s examine a real-world scenario that demonstrates why migration tables are necessary. In a coupon management application, we faced the following situation:

- We had a Coupon entity with localization enabled for the description field via CAP CDS modeling (a minimal model sketch follows after this list)
- The solution was already in production with customers using it via OData endpoints
- We needed to enhance the application by adding a Fiori UI with draft support
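To make the starting point concrete, here is a minimal sketch of such a model. Only the Coupon entity, its localized description, and the OPCS namespace come from our scenario; every other name is an illustrative assumption:

namespace OPCS;

using { cuid } from '@sap/cds/common';

entity Coupon : cuid {
  code        : String(20);             // illustrative field, not part of the original model
  description : localized String(255);  // localization triggers the auto-generated texts entity
}

The localized keyword is what makes CAP generate the companion Coupon.texts entity, which is persisted as the OPCS_Coupon_texts table with locale and ID as its composite primary key.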

The challenge arose because:

- The Coupon entity had a child entity texts that was auto-generated by CAP due to localization
- Adding draft support via the @fiori.draft.enabled annotation would work for standard entities but not for the auto-generated texts entity
- Enabling draft for the texts entity required structural changes to the table, including adding a new non-null primary key field ID_texts (see the sketch after this list)
- Simply deploying these changes would cause HDI container upgrades to fail when existing data was present
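Conceptually, enabling drafts changes the shape of the texts entity roughly as follows. This is a sketch of the effect described above, not literal compiler output:

// Conceptual shape of the auto-generated texts entity once drafts are enabled
// (a sketch of the effect, not literal compiler output):
entity Coupon_texts {
  key ID_texts    : UUID;        // new single technical key required for draft handling
      locale      : String(14);  // formerly part of the composite key
      ID          : UUID;        // formerly part of the composite key
      description : String(255);
}

Turning a composite key (locale, ID) into a single generated key (ID_texts) on a table that already contains rows is exactly the kind of incompatible change that a plain .hdbtable deployment cannot handle.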

The Solution: Two-Step Deployment with Migration Tables

To solve this problem, we implemented a two-step deployment process using .hdbmigrationtable artifacts. Let’s break it down in detail:

Step 1: Preparation Deployment

The first deployment prepares the database for the upcoming structural changes:

1. We add the persistence journal annotation to the texts entity:

annotate Coupon.texts with @cds.persistence.journal;

2. This generates a migration table file /src/OPCS.Coupon_texts.hdbmigrationtable with the current schema definition:

== version=1
COLUMN TABLE OPCS_Coupon_texts (
  locale NVARCHAR(14) NOT NULL,
  ID NVARCHAR(36) NOT NULL,
  description NVARCHAR(255),
  PRIMARY KEY(locale, ID)
)

3. We add the generated table to the undeploy.json file to prevent conflicts:

{
  "undeploy": [
    "src/gen/OPCS.Coupon_texts.hdbtable"
  ]
}

4. We deploy these changes, which effectively prepares the system for the migration but doesn’t change the actual table structure yet.

Step 2: Feature Deployment with Data Migration

The second deployment implements the actual feature with proper data migration:

1. We add the draft annotation to enable draft support:

annotate OPCS.Coupon with @fiori.draft.enabled;

2. The CDS compiler updates the migration table file, but we need to modify it to ensure a safe migration. Here’s the complete migration script:

== version=2
COLUMN TABLE OPCS_Coupon_texts (
  ID_texts NVARCHAR(36) NOT NULL,
  locale NVARCHAR(14),
  ID NVARCHAR(36),
  description NVARCHAR(255),
  PRIMARY KEY(ID_texts)
)
== migration=2
-- generated by cds-compiler version 4.9.8
ALTER TABLE OPCS_Coupon_texts ADD (ID_texts NVARCHAR(36) NULL);
UPDATE OPCS_Coupon_texts SET ID_texts = LOWER(
  SUBSTRING(TO_VARCHAR(NEWUID()), 1, 8) || '-' ||
  SUBSTRING(TO_VARCHAR(NEWUID()), 9, 4) || '-' ||
  SUBSTRING(TO_VARCHAR(NEWUID()), 13, 4) || '-' ||
  SUBSTRING(TO_VARCHAR(NEWUID()), 17, 4) || '-' ||
  SUBSTRING(TO_VARCHAR(NEWUID()), 21, 12)
) WHERE ID_texts IS NULL;
ALTER TABLE OPCS_Coupon_texts ALTER (locale NVARCHAR(14) NULL);
ALTER TABLE OPCS_Coupon_texts ALTER (ID NVARCHAR(36) NULL);
ALTER TABLE OPCS_Coupon_texts ALTER (ID_texts NVARCHAR(36) NOT NULL);
ALTER TABLE OPCS_Coupon_texts DROP PRIMARY KEY;
ALTER TABLE OPCS_Coupon_texts ADD PRIMARY KEY(ID_texts);

3. This script performs the following operations in sequence:

- Adds the new ID_texts column as nullable initially
- Populates it with properly formatted UUIDs using NEWUID() and string formatting
- Makes the previously required columns nullable
- Makes the new ID_texts column NOT NULL after it is populated
- Switches the primary key from the old composite key to the new column

4. Deploying these changes successfully transforms the table structure while preserving all existing data.
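After the second deployment, a couple of simple checks can confirm that the migration did what we expected. These are illustrative queries run against the HDI container's runtime schema, not part of the deployment itself:

-- every row received a generated key
SELECT COUNT(*) AS missing_keys
  FROM OPCS_Coupon_texts
 WHERE ID_texts IS NULL;     -- expected: 0

-- and all generated keys are unique
SELECT COUNT(*) - COUNT(DISTINCT ID_texts) AS duplicate_keys
  FROM OPCS_Coupon_texts;    -- expected: 0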

Key Insights for Working with Migration Tables

Based on this experience, here are some best practices for handling incompatible changes to HANA tables:

1. Always Use a Staged Approach

For significant structural changes, consider breaking the migration into multiple deployments to reduce risk. This allows you to:

- Prepare the system for changes without affecting functionality
- Validate intermediate states before proceeding
- Roll back more easily if something goes wrong

2. Understand the Migration Script in Detail

Auto-generated migration scripts often need manual adjustments. Pay close attention to:

- The order of operations, e.g., adding columns before making them NOT NULL (an illustration follows after this list)
- Data population steps (never make a column NOT NULL before filling it with data)
- Primary key changes (which often require particular care)
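As an illustration of the first two points, the following single statement would fail on a table that already contains rows, because HANA cannot add a NOT NULL column without a default value to a non-empty table. That is exactly why the generated script adds the column as nullable first, fills it, and only then tightens the constraint:

-- anti-pattern, for illustration only: fails when OPCS_Coupon_texts already holds data
ALTER TABLE OPCS_Coupon_texts ADD (ID_texts NVARCHAR(36) NOT NULL);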

3. Format UUID Values Properly

When generating UUIDs for new primary keys, ensure they match your expected format. SAP HANA’s NEWUID() function returns UUIDs without hyphens, so you may need to format them like this:

LOWER(
  SUBSTRING(TO_VARCHAR(NEWUID()), 1, 8) || '-' ||
  SUBSTRING(TO_VARCHAR(NEWUID()), 9, 4) || '-' ||
  SUBSTRING(TO_VARCHAR(NEWUID()), 13, 4) || '-' ||
  SUBSTRING(TO_VARCHAR(NEWUID()), 17, 4) || '-' ||
  SUBSTRING(TO_VARCHAR(NEWUID()), 21, 12)
)

4. Test Thoroughly in Non-Production Environments

Always validate your migration scripts in development and test environments with representative data before deploying to production.

5. Include Migration Tables in Version Control

Make sure to commit your .hdbmigrationtable files to Git or your version control system. They are crucial artifacts for the deployment process.

Tip: When reviewing database migration changes, pay special attention to the sequence of operations. The order matters significantly, especially when changing constraints and populating new required fields.

Conclusion

Migration Tables in SAP HANA provide a powerful mechanism for evolving your database schema while maintaining data integrity. By using this approach, we were able to enhance our coupon management application with draft functionality without disrupting existing production data.

The two-step deployment process demonstrates how to handle even complex changes safely. While it requires more planning and understanding of the underlying database operations, it enables continuous evolution of applications without the risky “drop and recreate” approach that would lead to data loss.

For developers working with CAP and SAP HANA Cloud, mastering migration tables is an essential skill that will help you navigate the challenges of database schema evolution with confidence.

Further Reading

For more information on HANA migration tables and related concepts, check out:

- SAP HANA Deployment Infrastructure (HDI) Reference for SAP HANA Platform: Migration Tables (.hdbmigrationtable)
- Enabling Draft for Localized Data
- Enabling hdbmigrationtable Generation for Selected Entities During CDS Build
