Workday
The Workday component uses the Workday API to retrieve and store data—such as employee, financial, and business-related data—from Workday, to be either referenced by an external table or loaded into a table, depending on your cloud data warehouse. You can then use transformation components to enrich and manage the data in permanent tables.
To extract data via the Workday Reporting as a Service API, use the Workday Custom Reports component instead.
This component may return structured (nested) data that requires flattening. For help with flattening such data, we recommend using the Extract Nested Data component.
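Workday responses are often nested, for example a worker record that contains personal details plus a list of positions. The Python sketch below is only an illustration of the kind of flattening involved; the record shape and field names are hypothetical rather than the actual Workday schema, and within a pipeline the Extract Nested Data component performs this step for you.

```python
import pandas as pd

# Hypothetical, simplified shape of a nested Workday worker record.
# Real Workday responses use different element names and nesting.
workers = [
    {
        "Worker_ID": "21001",
        "Personal": {"First_Name": "Ada", "Last_Name": "Lovelace"},
        "Positions": [
            {"Position_ID": "P-100", "Title": "Analyst"},
            {"Position_ID": "P-200", "Title": "Senior Analyst"},
        ],
    }
]

# Flatten the list of positions into one row per position, keeping the
# worker-level fields alongside each row.
flat = pd.json_normalize(
    workers,
    record_path="Positions",
    meta=["Worker_ID", ["Personal", "First_Name"], ["Personal", "Last_Name"]],
)
print(flat)
```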
If the component requires access to a cloud provider, it will use the cloud credentials associated with your environment to access resources.
To stage data to Azure Blob Storage, the Azure credentials associated with your environment must be assigned the Storage Blob Data Contributor role. For more information, read User assigned with the Storage Blob Data Contributor role.
Properties
Reference material is provided below for the Connect, Configure, and Destination properties.
Connect
Host
= string
Your Workday host name. Read Workday authentication guide to learn how to acquire this credential.
Tenant
= string
Your Workday Tenant ID. Read Workday authentication guide to learn how to acquire this credential.
Authentication Type
= drop-down
The authentication method to authorize access to your Workday data. Choose OAuth 2.0 Authorization Code to use an OAuth connection, or Username & password to use a username and password.
Authentication
= drop-down
(OAuth 2.0 Authorization Code only)
Choose your profile from the drop-down menu.
Click Manage to navigate to the OAuth tab to review OAuth connections and to add new connections. Read OAuth to learn how to create an OAuth connection.
Additionally, read Workday authentication guide, which explains how to create an OAuth connection for Workday.
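The component performs the OAuth exchange for you once a connection is configured. Purely as a hedged sketch of what the OAuth 2.0 authorization code grant involves, assuming the commonly documented Workday token endpoint pattern https://&lt;host&gt;/ccx/oauth2/&lt;tenant&gt;/token; the host, tenant, client credentials, and authorization code below are placeholders.

```python
import requests

# Placeholder values for illustration only; the component manages this
# exchange when you use an OAuth connection.
host = "wd2-impl-services1.workday.com"   # your Workday host
tenant = "acme_corp"                      # your Workday tenant ID

# Assumed Workday token endpoint pattern: /ccx/oauth2/<tenant>/token
token_url = f"https://{host}/ccx/oauth2/{tenant}/token"

response = requests.post(
    token_url,
    data={
        "grant_type": "authorization_code",
        "code": "<authorization-code-from-redirect>",
        "redirect_uri": "https://example.com/oauth/callback",
    },
    auth=("<client-id>", "<client-secret>"),  # HTTP Basic client authentication
    timeout=30,
)
response.raise_for_status()
access_token = response.json()["access_token"]
```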
Username
= string
(Username & password only) Your Workday username.
Password
= drop-down
(Username & password only)
Choose the secret definition that represents your credentials for this connector. Your credentials should be saved as a secret definition before using this component.
Click Manage to navigate to the Secret definitions tab to review your secret definitions and to add new secret definitions. Read Secret definitions to learn how to create a secret definition.
Configure
Version
= string
The version of the Workday Web Services directory you want to use. The default is v41.0, but you can replace this with any valid version. Designer supports any Workday Web Services version listed here.
Web Service
= drop-down
Select the Workday Web Service you want to query. The drop-down list will include all services available to the selected Version.
Operation
= drop-down
Select the operation you want to perform on the selected Workday Web Service. The drop-down list will include all operations available to the selected Web Service.
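Together, Version, Web Service, and Operation identify a single Workday Web Services request. As a hedged sketch, assuming the commonly documented endpoint pattern https://&lt;host&gt;/ccx/service/&lt;tenant&gt;/&lt;service&gt;/&lt;version&gt;; all values below are hypothetical, and the component builds and issues the request for you.

```python
# How Version, Web Service, and Operation fit together (illustrative only).
host = "wd2-impl-services1.workday.com"   # hypothetical host
tenant = "acme_corp"                      # hypothetical tenant
web_service = "Human_Resources"           # the selected Web Service
version = "v41.0"                         # the selected Version
operation = "Get_Workers"                 # the selected Operation

# Assumed Workday Web Services endpoint pattern:
#   https://<host>/ccx/service/<tenant>/<web service>/<version>
endpoint = f"https://{host}/ccx/service/{tenant}/{web_service}/{version}"
print(endpoint)

# The Operation becomes the request element inside the SOAP body, e.g.
#   <wd:Get_Workers_Request xmlns:wd="urn:com.workday/bsvc"/>
```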
Object Filter
= column editor
Define filters for extracting data from Workday.
- Object Name - Id Type: Select an object from the drop-down menu.
- ID: Specify the value of the object.
- Descriptor: Provide a description of the object.
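As a rough, hedged illustration of what an Object Filter represents, an ID-type filter narrows the request to specific object references, conceptually along the lines of the fragment below. The element names are illustrative only and vary by Web Service and Operation.

```python
# Illustrative only: an Object Filter with ID type "Employee_ID" and ID "21001"
# conceptually maps to a reference filter in the request body, roughly:
request_filter = """
<wd:Request_References xmlns:wd="urn:com.workday/bsvc">
  <wd:Worker_Reference>
    <wd:ID wd:type="Employee_ID">21001</wd:ID>
  </wd:Worker_Reference>
</wd:Request_References>
"""
```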
Data Selection
= dual listbox
Select the columns to be extracted and loaded.
Exclude from Organizations
= dual listbox
Select fine-grained organization object details to exclude from the extract and load operation.
Destination
Select your cloud data warehouse.
Destination
= drop-down
Select the destination for your data. This is either in Snowflake as a table or as files in cloud storage.
- Snowflake: Load your data into a table in Snowflake. The data must first be staged via Snowflake or a cloud storage solution.
- Cloud Storage: Load your data directly into files in your preferred cloud storage location. The format of these files can differ between source systems, and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.
Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.
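Because files written to cloud storage have no extension, one hedged way to inspect the output is to read the first few bytes of an object and check for known signatures. The bucket and key below are hypothetical; adjust them to wherever your pipeline writes.

```python
import boto3

# Hypothetical bucket and key for an object written by the pipeline.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-staging-bucket", Key="workday/20240229100736969-part1")
head = obj["Body"].read(8)

if head.startswith(b"PAR1"):
    print("Looks like Parquet")
elif head.startswith(b"\x1f\x8b"):
    print("Looks like gzip-compressed data")
elif head.lstrip().startswith((b"{", b"[")):
    print("Looks like JSON")
else:
    print("Possibly CSV or another text format:", head)
```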
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database to access. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Table Name
= string
The name of the table to be created in your Snowflake database. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Load Strategy
= drop-down
Define what happens if the table name already exists in the specified Snowflake database and schema.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Append: If the specified table name already exists, then the data is inserted without altering or deleting the existing data in the table. It's appended onto the end of the existing data in the table. If the specified table name doesn't exist, then the table will be created, and your data will be inserted into the table. For example, if you have a source holding 100 records, then on the first pipeline run, your target table will be created and 100 rows will be inserted. On the second pipeline run, those same 100 records will be appended to your existing target table, so now it holds 200 records. The third pipeline run will bring the table to 300 records, and so on.
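As a hedged sketch only, the load strategies correspond roughly to the following Snowflake SQL, shown here via snowflake-connector-python with hypothetical connection and table names. This is not the component's actual implementation.

```python
import snowflake.connector

# Hypothetical connection details; the component handles staging and loading.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="MY_WH", database="MY_DB", schema="MY_SCHEMA",
)
cur = conn.cursor()

# Rough equivalents (illustrative):
#   Replace:             CREATE OR REPLACE TABLE workday_workers AS SELECT ...
#   Truncate and Insert: TRUNCATE TABLE workday_workers; INSERT INTO workday_workers ...
#   Fail if Exists:      CREATE TABLE workday_workers (...);  -- errors if it already exists
#   Append:              INSERT INTO workday_workers ...      -- table created first if missing
cur.execute("TRUNCATE TABLE workday_workers")
cur.execute("INSERT INTO workday_workers SELECT * FROM staged_workday_data")
conn.close()
```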
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Stage Access Strategy
= drop-down (optional)
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
- Credentials: Connects to the external stage (AWS, Azure) using your configured cloud provider credentials. Not available for Google Cloud Storage.
- Storage Integration: Use a Snowflake storage integration to grant access to Snowflake to read data from and write to a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
Stage Platform
= drop-down
Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Snowflake: Stage your data on a Snowflake internal stage.
- Azure Storage: Stage your data in an Azure Blob Storage container.
- Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.
Click one of the tabs below for documentation applicable to that staging platform.
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
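Storage integrations are created by a Snowflake administrator before you can select them here. As a hedged sketch for the Amazon S3 case, run via snowflake-connector-python; all names and the role ARN are hypothetical, and the matching AWS IAM role and trust policy must also be configured as described in Snowflake's documentation.

```python
import snowflake.connector

# Hypothetical connection and object names; run as a suitably privileged role.
conn = snowflake.connector.connect(account="my_account", user="my_admin", password="...")
conn.cursor().execute("""
    CREATE STORAGE INTEGRATION my_s3_integration
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/workday/')
""")
conn.close()
```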
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Internal Stage Type
= drop-down
Select the Snowflake internal stage type. Use the Snowflake links provided to learn more about each type of stage.
- User: Each Snowflake user has a user stage allocated to them by default for file storage. You may find the user stage convenient if your files will only be accessed by a single user, but need to be copied into multiple tables.
- Named: A named stage provides high flexibility for data loading. Users with the appropriate privileges on the stage can load data into any table. Furthermore, because the stage is a database object, any security or access rules that apply to all objects will apply to the named stage.
Named stages can be altered and dropped. User stages cannot.
Named Stage
= drop-down
Select your named stage. Read Creating a named stage to learn how to create a new named stage.
Warning
There is a known issue where named stages that include special characters or spaces are not supported.
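As a hedged sketch, a named internal stage can be created ahead of time with a single statement, shown here via snowflake-connector-python with hypothetical names. Keep the stage name free of special characters and spaces because of the known issue above.

```python
import snowflake.connector

# Hypothetical connection details and stage name.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    database="MY_DB", schema="MY_SCHEMA",
)
# With no external URL, this creates an internal named stage.
conn.cursor().execute("CREATE STAGE IF NOT EXISTS workday_internal_stage")
conn.close()
```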
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Stage Access Strategy
= drop-down (optional)
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
- Storage Integration: Use a Snowflake storage integration to grant access to Snowflake to read data from and write to a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrite existing files with matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
File Prefix
= string (optional)
A string of characters that precedes the names of the written files. This can be useful for organizing the files in your storage location.
Storage
= drop-down
The cloud storage location where your data will be written as files. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Destination
= drop-down
Select the destination for your data. This is either in Databricks as a table or as files in cloud storage. The format of these files can differ between source systems, and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.
- Databricks: Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into files in your preferred cloud storage location.
Click either the Databricks or Cloud Storage tab on this page for documentation applicable to that destination type.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which schemas are available in the next parameter.
Schema
= drop-down
Select the Databricks schema. The special value, [Environment Default], will use the schema specified in the Data Productivity Cloud environment setup.
Table Name
= string
The name of the table to be created in your Databricks schema. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Load Strategy
= drop-down
Define what happens if the table name already exists in the specified Databricks schema.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Append: If the specified table name already exists, then the data is inserted without altering or deleting the existing data in the table. It's appended onto the end of the existing data in the table. If the specified table name doesn't exist, then the table will be created, and your data will be inserted into the table. For example, if you have a source holding 100 records, then on the first pipeline run, your target table will be created and 100 rows will be inserted. On the second pipeline run, those same 100 records will be appended to your existing target table, so now it holds 200 records. The third pipeline run will bring the table to 300 records, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Stage Platform
= drop-down
Use the drop-down menu to choose where the data is staged before being loaded into your Databricks table.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Azure Storage: Stage your data in an Azure Blob Storage container.
Click one of the tabs below for documentation applicable to that staging platform.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrite existing files with matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
File Prefix
= string (optional)
A string of characters that precedes the names of the written files. This can be useful for organizing the files in your storage location.
Storage
= drop-down
The cloud storage location where your data will be written as files. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Destination
= drop-down
Select the destination for your data. This is either in Amazon Redshift as a table or as files in cloud storage. The format of these files can differ between source systems, and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.
- Redshift: Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into files in your preferred cloud storage location.
Click either the Amazon Redshift or Cloud Storage tab on this page for documentation applicable to that destination type.
Schema
= drop-down
Select the Amazon Redshift schema that will contain your table. The special value, [Environment Default], will use the schema defined in the environment. For information about using multiple schemas, read Schemas.
Table Name
= string
The name of the table to be created in your Amazon Redshift database. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Load Strategy
= drop-down
Define what happens if the table name already exists in the specified Amazon Redshift database and schema.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Append: If the specified table name already exists, then the data is inserted without altering or deleting the existing data in the table. It's appended onto the end of the existing data in the table. If the specified table name doesn't exist, then the table will be created, and your data will be inserted into the table. For example, if you have a source holding 100 records, then on the first pipeline run, your target table will be created and 100 rows will be inserted. On the second pipeline run, those same 100 records will be appended to your existing target table, so now it holds 200 records. The third pipeline run will bring the table to 300 records, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into before it is loaded into your Amazon Redshift table. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrite existing files with matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
File Prefix
= string (optional)
A string of characters to include at the beginning of the names of the written files. Often used for organizing the files in your storage location.
Storage
= drop-down
The cloud storage location where your data will be written as files. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Deactivate soft delete for Azure blobs (Databricks)
If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the "Enable soft delete for blobs" setting in your Azure account for your pipeline to run successfully. To do this:
- Log in to the Azure portal.
- In the top-left, click ☰ → Storage Accounts.
- Select the intended storage account.
- In the menu, under Data management, click Data protection.
- Untick Enable soft delete for blobs. For more information, read Soft delete for blobs.
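The portal steps above are the documented route. As a hedged alternative sketch, the same setting can typically be toggled with the azure-mgmt-storage SDK, assuming its blob_services.set_service_properties operation; the subscription, resource group, and account names below are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import BlobServiceProperties, DeleteRetentionPolicy

# Placeholder identifiers; this assumes the azure-mgmt-storage
# blob_services.set_service_properties operation. The portal steps above
# achieve the same result.
credential = DefaultAzureCredential()
client = StorageManagementClient(credential, "<subscription-id>")

client.blob_services.set_service_properties(
    resource_group_name="my-resource-group",
    account_name="mystorageaccount",
    parameters=BlobServiceProperties(
        delete_retention_policy=DeleteRetentionPolicy(enabled=False)
    ),
)
```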
Snowflake | Databricks | Amazon Redshift |
---|---|---|
✅ | ✅ | ✅ |