SAP ODP
The SAP ODP orchestration component enables you to connect directly through SAP ODP and access the data sources available in SAPI and ABAP CDS views.
Note
This component is only available for use with Hybrid SaaS agents.
If the component requires access to a cloud provider, it will use the cloud credentials associated with your environment to access resources.
Uploading SAP drivers
Drivers for SAP ODP are not natively included in Hybrid SaaS agents, but can be uploaded to your agent instance using the process in Uploading external drivers to the agent.
Two driver files are required:
sapjco3.jar
libsapjco3.so
You can obtain these drivers as a single ZIP file from Download SAP Java Connector 3.1 SDK, selecting Linux for AArch64 compatible processors. Unzip the file and place the drivers in the storage location you specified, as described in Uploading external drivers to the agent. Do not change the driver file names.
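As a minimal illustration of this step, the Python sketch below unzips a local copy of the SAP JCo archive and copies the two required driver files into an agent driver location without renaming them. The archive name and driver directory are placeholders; use your actual download and the storage location you configured.

```python
import shutil
import zipfile
from pathlib import Path

# Placeholders: substitute your actual download and configured driver location.
archive = Path("sapjco3-linux-aarch64.zip")   # example name; your download may differ
extract_dir = Path("sapjco_extracted")
driver_dir = Path("/data/agent-drivers")      # the storage location you specified

# Unzip the SAP JCo SDK archive.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(extract_dir)

# Copy the two required drivers, keeping their original file names.
driver_dir.mkdir(parents=True, exist_ok=True)
for name in ("sapjco3.jar", "libsapjco3.so"):
    source = next(extract_dir.rglob(name))    # locate the file wherever it was unpacked
    shutil.copy2(source, driver_dir / name)
```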
Properties
Reference material is provided below for the Connect, Configure, Destination, and Advanced Settings properties.
Connect
Authentication Type
= drop-down
Select the authentication method. Only Username/Password is supported currently.
Username
= string
Your SAP username.
Password
= drop-down
The secret definition holding the password for your SAP username. Save your password as a secret definition before using this component.
Server Connection
= drop-down
The connection method. Choose either Direct or Load Balancer. The following connection parameters will change depending on the connection method.
Encryption
= drop-down
Select either None or TLS. Only required if Server Connection is Direct. For TLS encryption, a Web Socket Host, Web Socket Port, and TLS/SSL Certificate are required. For no encryption, a Host and System Number are required.
Host
= string
SAP ABAP application server host DNS. This is the host name of your target system. Only required for Direct server connections with no encryption.
System Number
= integer
The system number of the SAP ABAP app server. Your system number should have been provided at the point of installation. For ECC, it is usually 10. For S/4H, it is usually 00. Only required for Direct server connections with no encryption.
Web Socket Host
= string
Enter the host name for the TLS encrypted WebSocket connection. Only required for Direct server connections with TLS encryption.
Web Socket Port
= string
Enter the port number used for the TLS encrypted WebSocket connection. Only required for Direct server connections with TLS encryption.
TLS/SSL Certificate
= string
The TLS encryption certificate. Paste the text content of the certificate into the TLS/SSL Certificate dialog. Only required for Direct server connections with TLS encryption.
Message Server Host
= string
Enter the DNS host name or IP address of the SAP message server. Only required for Load Balancer connections.
Message Server Service
= string
Enter the SAP message server port number. This is optional. To resolve symbolic service names such as sapmsXXX, the operating system's network layer performs a lookup in /etc/services. If you're using port numbers instead of symbolic service names, no lookup is performed and no additional entries are needed. Only required for Load Balancer connections.
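As an illustration of that lookup, Python's standard library exposes the same operating system service database. sapmsPRD below is a hypothetical service name; a matching entry must exist in /etc/services for the lookup to succeed.

```python
import socket

# Resolve a symbolic SAP message server service name to a port number using
# the OS service database (/etc/services on Linux). An entry such as
#   sapmsPRD   3600/tcp
# must exist, otherwise the lookup raises OSError.
try:
    port = socket.getservbyname("sapmsPRD", "tcp")
    print(f"sapmsPRD resolves to port {port}")
except OSError:
    print("No /etc/services entry found; enter a numeric port number instead.")
```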
Group Server
= string
If using a group of SAP application servers, use this property to identify the application servers used (the "logon group"). This parameter is optional and is only required if used on your SAP system. The default value is a single SPACE. Only required for Load Balancer connections.
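The conditional requirements for the connection properties above can be summarised in a short sketch. This is an illustration only (the dictionary keys simply mirror the property names on this page); the component performs its own validation.

```python
def missing_connection_fields(settings: dict) -> list[str]:
    """Return required connection fields that have not been set.

    Illustration of the Direct/Load Balancer requirements described above.
    """
    if settings.get("Server Connection") == "Direct":
        if settings.get("Encryption") == "TLS":
            required = ["Web Socket Host", "Web Socket Port", "TLS/SSL Certificate"]
        else:  # Encryption is None
            required = ["Host", "System Number"]
    else:  # Load Balancer; Message Server Service and Group Server are optional
        required = ["Message Server Host"]
    return [field for field in required if not settings.get(field)]


# Example: a Direct TLS connection with the certificate missing.
print(missing_connection_fields({
    "Server Connection": "Direct",
    "Encryption": "TLS",
    "Web Socket Host": "s4h.example.com",   # placeholder host name
    "Web Socket Port": "443",
}))  # ['TLS/SSL Certificate']
```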
Client
= integer
Enter the SAP client number. The default is 100. From SAP help: "A SAP client is defined as a self-contained commercial, organizational, and technical unit within an SAP system. All business data within a client are protected from other clients." Read more about this at Customer Data and System Data.
Language
= string
Defaults to en. If not defined, the SAP system returns the default user language to JCo, and JCo will use that value. Language codes adhere to ISO 639-1.
Connection Options
= column editor
Additional connection options for SAP ODP. Add a new row with +; remove a row with -; or click Add All. For each connection option, select the Parameter from the drop-down list and enter its Value as a string.
Configure
Context
= drop-down
List of contexts available in the SAP system for the subscriber type SAP_BW. The context acts as a "data provider" for the data source. In SAP, a data provider permits configuration of data for extraction when targeting a specific use.
Data Source Search Term
= string
Search string used to identify and reduce the quantity of data sources returned by SAP. This is optional. For example, enter TEXT to filter the list of data sources to those containing the string "text". The maximum search string length is 30 characters.
Data Source
= drop-down
Select the SAP ODP data source. Set the Data Source Search Term property as a filter to limit the items in the list, if required.
In the context of SAP ODP, a data source is an "extractor". In SAP, an extractor performs a data extraction through a specified context. A data source can be the result of aggregating one or more tables or views in SAP. Each data source also has a semantic character, and a contextual description where the data source has been given description metadata in SAP.
The data sources available show a concatenation of:
- The data source technical name.
- A semantic (the SAP type of data source).
- The SAP data source description, in the language specified at login.
Semantics glossary:
- H = Hierarchy
- F = Transaction Data/Facts
- P = Master Data/Attributes
- T = Texts
- V = View
Data Selection
= dual listbox
Columns to include in the extraction. Columns will also have a contextual description where applicable. This property is optional.
Data Source filter
= column editor
- Input Column: Select an input column. The available input columns vary depending upon the data source.
- Qualifier:
- Is: Compares the column to the value using the comparator.
- Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
- Comparator: Choose a method of comparing the column to the value. Possible comparators include "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" comparator allows the wildcard character % to be used at the start and end of a string value to match a column. The "Null" comparator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from. A sketch of how these settings combine into a single filter condition follows this list.
- Value: The value to be compared.
This property is optional.
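The sketch below illustrates how one filter row's Qualifier, Comparator, and Value combine into a single condition. It is an illustration of the logic only, not the component's implementation; the column value in the example is hypothetical.

```python
from fnmatch import fnmatchcase

def row_matches(column_value, qualifier: str, comparator: str, value=None) -> bool:
    """Evaluate one Data Source filter row (illustrative only)."""
    if comparator == "Equal to":
        result = column_value == value
    elif comparator == "Greater than":
        result = column_value > value
    elif comparator == "Less than":
        result = column_value < value
    elif comparator == "Greater than or equal to":
        result = column_value >= value
    elif comparator == "Less than or equal to":
        result = column_value <= value
    elif comparator == "Like":
        # % wildcards at the start/end of the value map onto fnmatch's *.
        result = fnmatchcase(str(column_value), str(value).replace("%", "*"))
    elif comparator == "Null":
        result = column_value is None        # the Value setting is ignored
    else:
        raise ValueError(f"Unsupported comparator: {comparator}")
    # "Not" reverses the comparison, e.g. "Equal to" becomes "Not equal to".
    return not result if qualifier == "Not" else result


# Example: MATERIAL Is Like "MAT%" matches values starting with "MAT".
print(row_matches("MAT-001", "Is", "Like", "MAT%"))   # True
```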
Load Type
= drop-down
Select a load type to manage the quantity of data sent via the network. This is especially useful when dealing with large RFC table parameters. Available options depend on your chosen data source. Possible types are Full Load, Delta Load, and Recovery (available only if the data source supports deltas). The default is Full Load.
When Full Load is selected, the whole table is sent back to the caller. When Delta is selected, only appended, deleted, and updated table rows are transferred back to the caller.
Delta Initialisation
= string
Only used if Load Type is Delta. To only extract changes (creates, modifications, and deletes) from now onwards, enter X. Leave blank (the default) to also return historical changes.
Recovery Pointer
= string
Only available if the data source supports deltas, and the Load Type is Recovery.
SAP ODP allows you to repeat the extraction of data that may have been lost or corrupted due to an interruption of service, to restore and guarantee the data integrity of the extracted data. A recovery pointer is set by SAP ODP at a specific point-in-time in the delta queue, and you can use this to recover any data change recorded from that point onward. When the recovery pointer is set and the job repeated, all the data changes that happened from that recovery pointer onward are extracted, while all data changes that happened before that recovery pointer are not extracted.
You must obtain the value of the recovery pointer from the Replication Pointers table in SAP. After a data extraction that you believe to have been interrupted, refresh the SAP Replication Pointers table and make a note of the pointer. The pointer is a string of digits in the following format: 20240131082113.000021000. This is a year-month-day-hour-minute-second timestamp and must be in this exact format; any other format will lead to an error. Enter this string in the Recovery Pointer property to tell the component where in the delta queue to begin the recovery run.
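As a small sketch of the expected shape (assuming the nine-digit suffix shown in the example above), the following validates a recovery pointer and extracts its timestamp portion:

```python
import re
from datetime import datetime

def parse_recovery_pointer(pointer: str) -> datetime:
    """Validate a recovery pointer such as 20240131082113.000021000 and
    return its year-month-day-hour-minute-second timestamp.

    Illustration only: the component expects the string exactly as it
    appears in the SAP Replication Pointers table.
    """
    if not re.fullmatch(r"\d{14}\.\d{9}", pointer):
        raise ValueError(f"Unexpected recovery pointer format: {pointer!r}")
    return datetime.strptime(pointer[:14], "%Y%m%d%H%M%S")


print(parse_recovery_pointer("20240131082113.000021000"))  # 2024-01-31 08:21:13
```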
Subscriber Name
= string
The subscriber is the consumer of the data. The subscriber name is assigned by the user to identify which application is consuming the data. The maximum string size is 32 characters.
Subscriber Process
= string
The identifier for the extraction process of the subscriber. The maximum string size is 64 characters.
Subscriber Run
= string
The run ID of the subscriber. For example, ${dt.now()}. The maximum string size is 64 characters. This property is optional.
Destination
Select your cloud data warehouse.
Destination
= drop-down
- Snowflake: Load your data into Snowflake. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into your preferred cloud storage location.
Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Table Name
= string
The name of the table to be created.
Load Strategy
= drop-down
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if you have a source holding 100 records, then on the first pipeline run your target table will be created and 100 rows inserted. On the second pipeline run, those same 100 records will be appended to your existing target table, so it then holds 200 records. After the third pipeline run, the table holds 300 records, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Stage Platform
= drop-down
Choose a data staging platform using the drop-down menu.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Snowflake: Stage your data on a Snowflake internal stage.
- Azure Storage: Stage your data in an Azure Blob Storage container.
- Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.
Click one of the tabs below for documentation applicable to that staging platform.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Internal Stage Type
= drop-down
A Snowflake internal stage type. Currently, only type User is supported.
Read Choosing an Internal Stage for Local Files to learn more about internal stage types and the usage of each.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrite existing files with matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
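For illustration, the naming patterns in the table above can be mirrored as follows. The real names are generated by the pipeline at run time; this sketch only reproduces the documented structure.

```python
import uuid
from datetime import datetime, timezone

def staged_file_name(part: int, folder_path: str = "", file_prefix: str = "",
                     append: bool = True) -> str:
    """Mirror the documented file naming structure (illustration only)."""
    # Timestamp to millisecond precision, e.g. 20240229100736969.
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f")[:-3]
    if folder_path and file_prefix:
        if append:
            return f"{folder_path}/{file_prefix}-{timestamp}-part{part}"
        return f"{folder_path}/{file_prefix}-part{part}"        # overwrite
    if append:
        return f"{uuid.uuid4()}/{timestamp}-part{part}"         # no folder path/prefix
    raise ValueError("Folder Path and File Prefix are required to overwrite files in folder.")


print(staged_file_name(1, "folder", "prefix"))  # e.g. folder/prefix-20240229100736969-part1
```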
Folder Path
= string (optional)
The folder path of the written files.
File Prefix
= string (optional)
A string of characters to include at the beginning of the written files. Often used for organizing database objects.
Storage
= drop-down
A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Destination
= drop-down
- Databricks: Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into your preferred cloud storage location.
Click either the Databricks or Cloud Storage tab on this page for documentation applicable to that destination type.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which schemas are available in the next parameter.
Schema
= drop-down
Select the Databricks schema. The special value, [Environment Default], will use the schema specified in the Data Productivity Cloud environment setup.
Table Name
= string
The name of the table to be created.
Load Strategy
= drop-down
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if you have a source holding 100 records, then on the first pipeline run your target table will be created and 100 rows inserted. On the second pipeline run, those same 100 records will be appended to your existing target table, so it then holds 200 records. After the third pipeline run, the table holds 300 records, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Stage Platform
= drop-down
Choose a data staging platform using the drop-down menu.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Azure Storage: Stage your data in an Azure Blob Storage container.
Click one of the tabs below for documentation applicable to that staging platform.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrite existing files with matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path of the written files.
File Prefix
= string (optional)
A string of characters to include at the beginning of the written files. Often used for organizing database objects.
Storage
= drop-down
A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Destination
= drop-down
- Redshift: Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into your preferred cloud storage location.
Click either the Amazon Redshift or Cloud Storage tab on this page for documentation applicable to that destination type.
Schema
= drop-down
Select the Redshift schema. The special value, [Environment Default], will use the schema defined in the environment. For information about using multiple schemas, read Schemas.
Table Name
= string
The name of the table to be created.
Load Strategy
= drop-down
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if you have a source holding 100 records, then on the first pipeline run your target table will be created and 100 rows inserted. On the second pipeline run, those same 100 records will be appended to your existing target table, so it then holds 200 records. After the third pipeline run, the table holds 300 records, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrite existing files with matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path of the written files.
File Prefix
= string (optional)
A string of characters to include at the beginning of the written files. Often used for organizing database objects.
Storage
= drop-down
A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Advanced Settings
Max Package Size
= integer
An integer representing the package size, in bytes, of the data in SAP. The default setting is 5,000,000 bytes.
Max Package Size is based on the size of the package in SAP, measured in compressed bytes. This size may need to be adjusted depending on the content of the package and the size constraints of the cloud data platform.
Deactivate soft delete for Azure blobs (Databricks)
If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the "Enable soft delete for blobs" setting in your Azure account for your pipeline to run successfully. To do this:
- Log in to the Azure portal.
- In the top-left, click ☰ → Storage Accounts.
- Select the intended storage account.
- In the menu, under Data management, click Data protection.
- Untick Enable soft delete for blobs. For more information, read Soft delete for blobs.
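If you prefer to script this change rather than use the portal, a sketch using the azure-identity and azure-mgmt-storage Python packages might look like the following. The subscription, resource group, and storage account names are placeholders, and the exact model and method names can vary between SDK versions, so treat this as an outline rather than a definitive implementation.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import BlobServiceProperties, DeleteRetentionPolicy

# Placeholders: substitute your own subscription, resource group, and account.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
account_name = "<storage-account>"

client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Disable "Enable soft delete for blobs" on the staging storage account.
client.blob_services.set_service_properties(
    resource_group,
    account_name,
    parameters=BlobServiceProperties(
        delete_retention_policy=DeleteRetentionPolicy(enabled=False)
    ),
)
```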
Snowflake | Databricks | Amazon Redshift |
---|---|---|
✅ | ✅ | ✅ |