The SAP HDI Deployer (@sap/hdi-deploy) is a Node.js-based module for deploying SAP HANA database artifacts within HDI (HANA Deployment Infrastructure) environments. It facilitates the deployment of HDI-based persistence models and can be used in various contexts, including XS Advanced (XSA) and SAP Business Technology Platform (BTP) / Cloud Foundry (CF). The HDI Deployer is typically integrated into a database module, where it is specified as a dependency in the package.json file, allowing seamless deployment of design-time artifacts to the respective HDI container.
After a routine upgrade of our server, we ran into a unique situation with the remote sources: some of them were down, and attempting to start them resulted in the following error:

RS[xxxxxx]:Failed to resume CDC replication. Error:com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [345] (at 134): invalid table name: Could not find table/view xxxx_xxxTRIGGER_QUEUE in schema aaa_bbb: line 8 col 23 (at pos 198)
The user aaa_bbb (masked name) is the technical user that was used while configuring the remote source, and the missing table is a standard table generated by the HANA framework. Despite all possible attempts, our only recourse was the standard fix for all software issues: Recreate and Restart.
The challenge with recreation was that there were more than 200 remote subscriptions under the remote source, all of them created using .hdbreptask artifacts in our HDI project.
By default, the SAP HDI Deployer skips the deployment of unmodified files, a feature known as delta deployment. This ensures that only files that have changed since the last deployment are pushed to the HDI container, optimizing the deployment process and reducing unnecessary overhead.
Once again at a crossroads, we had two options: manually make a trivial change to each of the 200+ .hdbreptask files and push them to the Git repo as modified, so that the pipeline would deploy them; or keep searching for a better way. We finally found the option hidden in package.json.
Deployer options through package.json
The option to force-deploy files in an existing HDI project is:

--include-filter <list of files separated by single spaces> --treat-unmodified-as-modified

Each file is referenced by its path from the src folder.
ex: src/<folder>/<artifactname>.hdbreptask
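With 200+ subscriptions, typing that list by hand is error-prone. A small helper like the one below (a hypothetical script, not part of @sap/hdi-deploy) can build the space-separated argument list from a set of design-time file paths:

```javascript
// Hypothetical helper: build the space-separated value for --include-filter
// from a list of design-time paths, keeping only .hdbreptask artifacts.
function buildIncludeFilter(paths) {
  return paths.filter((p) => p.endsWith('.hdbreptask')).join(' ');
}

// Placeholder file names for illustration.
const designTimeFiles = [
  'src/replication/orders.hdbreptask',
  'src/replication/items.hdbreptask',
  'src/views/orders.hdbview', // filtered out: not a reptask
];
console.log(buildIncludeFilter(designTimeFiles));
// src/replication/orders.hdbreptask src/replication/items.hdbreptask
```

In a real project the input list could come from walking the src folder; the point is simply to generate the exact string that goes after --include-filter.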
After the edit, package.json will look as follows:

{
  "name": "deploy",
  "dependencies": {
    "@sap/hdi-deploy": "^4.6"
  },
  "scripts": {
    "start": "node node_modules/@sap/hdi-deploy/deploy.js --auto-undeploy --include-filter src/xxxxxxx/xxxxxxxxxxx.hdbreptask src/xxxxxxxxx/yyyyyyyyy.hdbreptask src/xxxxxxxxx/zzzzzzzzz.hdbreptask --treat-unmodified-as-modified"
  }
}
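The same edit can be applied programmatically rather than by hand. The sketch below assembles the "start" script from a file list, mirroring the package.json shown above (the file names are placeholders, and the helper itself is hypothetical):

```javascript
// Hypothetical helper: return a copy of package.json content with the
// "start" script rewritten to force-deploy the given files, matching the
// manual edit shown in the article.
function withForcedDeploy(pkg, files) {
  const start = [
    'node node_modules/@sap/hdi-deploy/deploy.js',
    '--auto-undeploy',
    '--include-filter',
    ...files,
    '--treat-unmodified-as-modified',
  ].join(' ');
  return { ...pkg, scripts: { ...(pkg.scripts || {}), start } };
}

const pkg = {
  name: 'deploy',
  dependencies: { '@sap/hdi-deploy': '^4.6' },
  scripts: { start: 'node node_modules/@sap/hdi-deploy/deploy.js' },
};
const patched = withForcedDeploy(pkg, ['src/replication/orders.hdbreptask']);
console.log(patched.scripts.start);
```

Writing the result back with JSON.stringify(patched, null, 2) keeps the file readable for the next person who has to revert it.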
Once the file is prepared, connect to the Cloud Foundry space and deploy to the HDI container defined through the Business Application Studio.
During deployment, the console will display the number of files pushed as modified.
Once the console confirms success, the change can be pushed to Git, merged, and deployed through the pipeline.
Once the deployment is confirmed in the Cloud Transport Management service, reset the package.json file. This step avoids forced deployment of the same files in future pipeline runs.
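The reset step can be sketched the same way: restore the "start" script to the plain deployer call so later runs fall back to delta deployment. The default script shown here is the usual one for an HDI database module; verify it against your own project before relying on it:

```javascript
// Hypothetical helper: restore the "start" script to the plain deployer
// invocation, removing the force-deploy flags added earlier.
function resetStartScript(pkg) {
  return {
    ...pkg,
    scripts: {
      ...(pkg.scripts || {}),
      start: 'node node_modules/@sap/hdi-deploy/deploy.js',
    },
  };
}

const forced = {
  name: 'deploy',
  scripts: {
    start:
      'node node_modules/@sap/hdi-deploy/deploy.js --auto-undeploy --include-filter src/a.hdbreptask --treat-unmodified-as-modified',
  },
};
console.log(resetStartScript(forced).scripts.start);
// node node_modules/@sap/hdi-deploy/deploy.js
```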
The option is also very useful when artifacts need to be redeployed because components on the HANA Cloud server were updated during an upgrade, e.g. the hierarchy engine.
I found the option sleek, clean, and a bit geeky 🙂
Hope you found it useful.
Until next time – Ciao.