SAP Datasphere CLI & Python: Exporting Modeling Objects to CSV Files for Each Artifact


Introduction

In this blog post, we'll explore how to use Python alongside the SAP Datasphere CLI to extract modeling objects and export them to CSV files. The script lets users handle artifacts such as remote tables, views, replication flows, and more, for each space in SAP Datasphere.
This solution is particularly useful for automating repetitive tasks and ensuring structured data handling across different modeling objects.

Prerequisites

Steps to install the SAP Datasphere CLI:

https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/f7d5eddf20a34a1aa48d8e2c68a44e28.html/ 

https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-external-access-overview-apis-cli-and-sql/bc-p/14086942#M180986/ 
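Before running anything, it can help to verify that the CLI is actually on your PATH. A minimal sketch (the executable name datasphere is an assumption based on the npm package; adjust if your installation differs):

```python
import shutil

def cli_available(name: str = "datasphere") -> bool:
    """Return True if the named executable can be found on PATH."""
    return shutil.which(name) is not None

if __name__ == "__main__":
    if cli_available():
        print("datasphere CLI found")
    else:
        print("datasphere CLI not found - install it first (see links above)")
```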

Step-by-Step Process

Step 1: Prepare the Login.json file

Create an OAuth client with Purpose set to Interactive Usage and Redirect URL set to http://localhost:8080

Get the values of all the fields below from the OAuth client and prepare the Login.json file.

{
"client_id": "",
"client_secret": "",
"authorization_url": "",
"token_url": "",
"access_token": "",
"refresh_token": ""
}
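Before handing the file to the CLI, it can be worth sanity-checking that Login.json contains every expected field. A small sketch (the key list mirrors the template above; validate_login_file is a hypothetical helper, not part of the CLI):

```python
import json

REQUIRED_KEYS = {
    "client_id", "client_secret", "authorization_url",
    "token_url", "access_token", "refresh_token",
}

def validate_login_file(path):
    """Load Login.json and raise if any expected OAuth field is missing."""
    with open(path) as f:
        secrets = json.load(f)
    missing = REQUIRED_KEYS - secrets.keys()
    if missing:
        raise ValueError(f"Login.json is missing keys: {sorted(missing)}")
    return secrets
```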

 

Step 2: Create the Model_Object.py file with the code below

dsp_host: the URL of your Datasphere tenant.

secrets_file: the path to your Login.json file.

import subprocess
import sys

import pandas as pd

def manage_Modeling_Object(Modeling_Object):
    # Step 1: Log in to Datasphere using the host and secrets file
    dsp_host = '<URL of Datasphere>'
    secrets_file = '<path>/Login.json'
    login_command = f'datasphere login --host {dsp_host} --secrets-file {secrets_file}'
    subprocess.run(login_command, shell=True)  # Execute the login command

    # Step 2: Retrieve a list of all spaces in JSON format
    command = ['datasphere', 'spaces', 'list', '--json']
    result_spaces = subprocess.run(command, capture_output=True, text=True)  # Run the command and capture output

    # Step 3: Parse the list of spaces from the command's output
    spaces = result_spaces.stdout.splitlines()  # Split output into individual lines

    ModelingObject_data = []  # Rows collected for the CSV file

    # Step 4: Special case: the Modeling Object 'spaces' lists the spaces themselves
    if Modeling_Object == 'spaces':
        for space in spaces:
            if space.strip() in ('[', ']'):
                continue  # Skip the JSON array brackets
            space_id = space.strip().replace('"', '').replace(',', '')  # Extract the space ID

            # Add space details to the data list
            ModelingObject_data.append({
                'Space ID': space_id,
                'Technical Name': space_id,
                'TYPE': Modeling_Object[:-1].upper()  # Singular, uppercase object name (e.g. SPACE)
            })

    # Step 5: Otherwise, process the Modeling Object for each space
    else:
        for space in spaces:
            if space.strip() in ('[', ']'):
                continue  # Skip the JSON array brackets
            space_id = space.strip().replace('"', '').replace(',', '')  # Extract the space ID

            # Step 6: Retrieve Modeling Objects for the current space
            command = ['datasphere', 'objects', Modeling_Object, 'list', '--space', space_id]
            result_ModelingObject = subprocess.run(command, capture_output=True, text=True)

            # Step 7: Parse the Modeling Object data from the output
            ModelingObject_info = result_ModelingObject.stdout.splitlines()
            print('Checking ' + Modeling_Object.upper() + ' for space: ' + space_id)  # Log the space being checked

            # Step 8: Process each Modeling Object
            if len(ModelingObject_info) > 1:
                for flow in ModelingObject_info:
                    if any(ch in flow for ch in '{}[]'):
                        continue  # Skip braces and brackets in the JSON output
                    cleaned_flow = flow.replace('"technicalName":', '').replace('"', '').replace(',', '').strip()

                    # Step 9: Add Modeling Object details to the data list
                    ModelingObject_data.append({
                        'Space ID': space_id,
                        'Technical Name': cleaned_flow,
                        'TYPE': Modeling_Object[:-1].upper()  # Singular, uppercase object name (e.g. VIEW)
                    })

    # Step 10: Write the collected data into a CSV file
    if ModelingObject_data:
        df = pd.DataFrame(ModelingObject_data)  # Create a DataFrame from the data list
        df.to_csv(Modeling_Object.upper() + '.csv', index=False)  # Save the DataFrame without the index
        print('Space-wise, all ' + Modeling_Object.upper() + ' have been written to ' + Modeling_Object.upper() + '.csv.')
    else:
        print('No Modeling Objects found.')  # Log message if no data was collected

    print('-' * 120)  # Separator for readability

if __name__ == '__main__':
    if len(sys.argv) > 1:
        # Run only for the Modeling Object passed on the command line
        manage_Modeling_Object(sys.argv[1])
    else:
        # No argument given: run for the predefined Modeling Objects
        for obj in ('remote-tables', 'local-tables', 'views', 'intelligent-lookups',
                    'data-flows', 'replication-flows', 'transformation-flows',
                    'task-chains', 'analytic-models', 'data-access-controls'):
            manage_Modeling_Object(obj)
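The line-by-line parsing above works, but it is fragile if the CLI ever changes its pretty-printing. Since --json output is requested, the space list can instead be parsed with json.loads. A sketch, assuming datasphere spaces list --json prints a JSON array of space IDs:

```python
import json

def parse_spaces(raw_output: str) -> list:
    """Parse the CLI's JSON output into a list of space IDs."""
    return [str(space).strip() for space in json.loads(raw_output)]

# With output shaped like the assumed CLI response:
# parse_spaces('["SALES", "FINANCE"]') returns ['SALES', 'FINANCE']
```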

 

Step 3: Open a command prompt and execute the Model_Object.py file

Run the script with python Model_Object.py, optionally passing a single object name (for example, python Model_Object.py views). Once execution finishes, the script generates a CSV file for each of the Datasphere artifacts listed in the Python code.

Each CSV file has three columns:

1) Space ID: the name of the space

2) Technical Name: the exact technical name of the object

3) TYPE: the type of object (e.g. VIEW, LOCAL-TABLE, REMOTE-TABLE, REPLICATION-FLOW)
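Because every file shares the same three columns, the per-object CSVs can be combined into a single inventory with pandas. A sketch (column names follow the script above; build_inventory and the All_Objects.csv output name are assumptions, not part of the CLI):

```python
import pandas as pd

def build_inventory(csv_paths, output_path="All_Objects.csv"):
    """Concatenate per-object CSVs (same three columns) into one file."""
    frames = [pd.read_csv(p) for p in csv_paths]
    inventory = pd.concat(frames, ignore_index=True)
    inventory.to_csv(output_path, index=False)
    return inventory
```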

 

Conclusion

This script demonstrates how Python and the SAP Datasphere CLI can work together to streamline artifact management and export metadata systematically. By following the steps provided, users can extend or adapt the code to suit their requirements.

 
