Anaplan Bulk
The Anaplan Bulk component uses the Anaplan API to retrieve bulk data (exports and views) and load it into a table. This stages the data, so the table is reloaded each time the job runs. You can then use transformations to enrich and manage the data in permanent tables.
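For context, the retrieval the component performs maps onto Anaplan's Integration API v2 export flow: trigger the export action as a task, poll until it completes, then download the resulting file in chunks. The following is a minimal sketch of that flow, not the component's actual implementation; all IDs, the token, and the output file name are placeholders.

```python
# Sketch of the Anaplan Integration API v2 export flow. All IDs and the
# auth token are placeholders; this illustrates the API, not Matillion's code.
import time
import requests

BASE = "https://api.anaplan.com/2/0"
HEADERS = {"Authorization": "AnaplanAuthToken <token>"}  # placeholder token
WORKSPACE, MODEL, EXPORT = "<workspace-id>", "<model-id>", "<export-id>"

# 1. Trigger the export action; Anaplan runs it as an asynchronous task.
task = requests.post(
    f"{BASE}/workspaces/{WORKSPACE}/models/{MODEL}/exports/{EXPORT}/tasks",
    headers=HEADERS, json={"localeName": "en_US"},
).json()["task"]

# 2. Poll the task until it reaches the COMPLETE state.
while True:
    status = requests.get(
        f"{BASE}/workspaces/{WORKSPACE}/models/{MODEL}/exports/{EXPORT}"
        f"/tasks/{task['taskId']}", headers=HEADERS,
    ).json()["task"]
    if status["taskState"] == "COMPLETE":
        break
    time.sleep(5)

# 3. Download the exported file chunk by chunk (for export actions, the
#    export ID doubles as the file ID).
chunks = requests.get(
    f"{BASE}/workspaces/{WORKSPACE}/models/{MODEL}/files/{EXPORT}/chunks",
    headers=HEADERS,
).json()["chunks"]
with open("export.csv", "wb") as out:
    for chunk in chunks:
        out.write(requests.get(
            f"{BASE}/workspaces/{WORKSPACE}/models/{MODEL}/files/{EXPORT}"
            f"/chunks/{chunk['id']}",
            headers={**HEADERS, "Accept": "application/octet-stream"},
        ).content)
```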
Warning
This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the Load Option Recreate Target Table to Off will prevent both recreation and truncation. Do not modify the target table structure manually.
Properties
Snowflake
Name
= string
A human-readable name for the component.
Authentication Method
= drop-down
Authenticate this component using either an Anaplan OAuth entry or valid Anaplan username/password credentials.
OAuth
= drop-down
A configured Anaplan OAuth entry. Read Anaplan Bulk authentication guide to learn how to create an OAuth entry.
Note
If you don't expect to run jobs with the Anaplan Bulk component regularly, it's recommended that you use a non-rotatable refresh token for your OAuth entries. Rotatable refresh tokens expire after 35 minutes.
Username
= string
Your Anaplan login username.
Password
= string
Your Anaplan login password. Use Manage Passwords to save your password as a named entry, or store the password directly in the component.
Warning
Login passwords have a 90-day lifespan in Anaplan. You must reset your password manually. If your login fails because your password has expired, OAuth clients will also fail. Once you reset your password, OAuth clients will resume normal functionality.
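For reference, username/password authentication against Anaplan exchanges the credentials for a time-limited auth token via Anaplan's authentication service. A minimal sketch, assuming the public auth endpoint; the credentials are placeholders.

```python
# Sketch: exchange Anaplan username/password for an auth token using
# HTTP basic authentication. Credentials below are placeholders.
import requests

resp = requests.post(
    "https://auth.anaplan.com/token/authenticate",
    auth=("user@example.com", "<password>"),
)
resp.raise_for_status()
token = resp.json()["tokenInfo"]["tokenValue"]
# Subsequent API calls send the header: Authorization: AnaplanAuthToken <token>
```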
Data Source Type
= drop-down
Select the data source type: Export or View. Your choice determines which of the properties below are available.
Workspace
= drop-down
An Anaplan workspace. You can find your Workspace ID in the URL of a given model. See the portion of the URL that reads /workspaces/[workspace-id]/, where [workspace-id] is an alphanumeric Workspace ID.
Model
= drop-down
An Anaplan model. You can find your Model ID in the URL of a given model. See the portion of the URL that reads /models/[model-id]/, where [model-id] is an alphanumeric Model ID.
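If you want to pull both IDs out of a model URL programmatically, a hypothetical helper like the one below works; the function name and the example URL are illustrative only.

```python
# Hypothetical helper: extract the workspace and model IDs from an Anaplan
# model URL of the form .../workspaces/<workspace-id>/models/<model-id>/...
import re

def anaplan_ids(url: str) -> dict:
    match = re.search(r"/workspaces/([0-9A-Za-z]+)/models/([0-9A-Za-z]+)", url)
    if not match:
        raise ValueError("URL does not contain /workspaces/.../models/...")
    return {"workspace_id": match.group(1), "model_id": match.group(2)}

# Example with placeholder IDs:
print(anaplan_ids("https://us1a.app.anaplan.com/workspaces/8a81b09d5e8c6f2a/models/FC12AB34CD56"))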
Module
= drop-down
An Anaplan module. A module represents a specific function within a model, such as margin calculation or profit and loss. This property is only available when using a view data source.
Export
= drop-down
Select an available export. Refer to Anaplan's documentation on creating export actions to learn more.
Note
It's recommended that you format your exports as "Tabular Single or Multi Column CSV" to allow exports to be stored in a cloud data warehouse as a standard table.
View
= drop-down
Select an available view. Refer to Anaplan's documentation on creating custom views to learn more.
Type
= drop-down
Choose between using a standard table or an external table.
- External: The data will be put into an S3 bucket and referenced by an external table.
- Standard: The data will be staged on an S3 bucket before being loaded into a table. This is the default setting.
Warehouse
= drop-down
Select the Snowflake warehouse. The special value, [Environment Default], will use the warehouse defined in the Matillion ETL environment. For more information, read Virtual Warehouses.
Database
= drop-down
Select the Snowflake database. The special value, [Environment Default], will use the database defined in the Matillion ETL environment. For more information, read Databases, Tables, and Views.
Schema
= drop-down
Select the Snowflake schema. The special value, [Environment Default], will use the schema defined in the Matillion ETL environment. For more information, read Database, Schema, and Share DDL.
Target Table
= string
Provide a new table name.
Warning
This table will be recreated on each run of the job, dropping any existing table of the same name.
Stage
= drop-down
Select a managed stage. The special value, [Custom], will create a stage "on the fly" for the component's run. Selecting [Custom] provides all the properties typically seen in the Manage Stages dialog for your input. If you select a managed stage that has already been configured in Manage Stages, the additional properties are not provided, as they have already been configured.
Manage Stages can be found by clicking the Environments panel in the lower-left, then right-clicking an environment.
To learn more, read Manage Stages.
Stage Platform
= drop-down
Select a staging setting.
- Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
- Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
- Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
- Existing Google Cloud Storage Location: Activates the GCS Staging Area property, allowing users to specify a custom staging area within Google Cloud Storage.
Stage Authentication
= drop-down
Select an authentication method for data staging.
- Credentials: Uses the credentials configured in the Matillion ETL environment. If no credentials have been configured, an error will occur.
- Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. To learn more, read Create Storage Integration.
Storage Integration
= drop-down
Select a Snowflake storage integration from the dropdown list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location (Amazon S3, Microsoft Azure, Google Cloud Storage) and must be set up in advance of selection. To learn more about setting up a storage integration for use in Matillion ETL, read Storage Integration Setup Guide.
This property is only available when Stage Authentication is set to Storage Integration.
S3 Staging Area
= drop-down
Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
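Should staged objects ever linger (for example, after a failed run, or with the Clean Staged Files load option turned off), they can be inspected and removed with standard S3 calls. A minimal sketch using boto3; the bucket name and prefix are placeholders.

```python
# Sketch: list and delete leftover staged objects under a known prefix.
# Bucket and prefix are placeholders for your own staging location.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-staging-bucket", Prefix="anaplan-stage/"):
    for obj in page.get("Contents", []):
        print("removing", obj["Key"])
        s3.delete_object(Bucket="my-staging-bucket", Key=obj["Key"])
```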
Storage Account
= drop-down
Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.
Blob Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Staging Area
= GCS bucket path
The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data. For more information, read Creating storage buckets.
Encryption
= drop-down
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- None: No encryption.
- SSE KMS: Encrypt the data according to a key stored on KMS.
- SSE S3: Encrypt the data according to a key stored on an S3 bucket.
KMS Key ID
= drop-down
The ID of the KMS encryption key you have chosen to use in the Encryption property.
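At the S3 API level, SSE-KMS staging means each object is written with server-side encryption against the named key. A minimal sketch with boto3; the bucket, object key, and KMS key ARN are placeholders.

```python
# Sketch: upload a staged object with SSE-KMS encryption. All names and
# the KMS key ARN are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-staging-bucket",
    Key="anaplan-stage/part-0000.csv.gz",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/<key-id>",
)
```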
Load Options
= multiple drop-downs
- Clean Staged Files: Destroy staged files after loading data. Default is On.
- String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
- File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
- Trim String Columns: Remove leading and trailing characters from a string column. Default is On.
- Compression Type: Set the compression type to either gzip or None. The default is gzip.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
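Several of these options correspond loosely to Snowflake COPY INTO file format settings. As a point of reference only, a minimal sketch with the Snowflake Python connector; Matillion generates its own SQL, so this is an illustration rather than the component's actual statement, and all connection details, stage, and table names are placeholders.

```python
# Sketch: a COPY INTO statement whose file-format options mirror the
# load options above. Connection details and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
conn.cursor().execute("""
    COPY INTO anaplan_target
    FROM @my_stage/anaplan/
    FILE_FORMAT = (
        TYPE = CSV
        COMPRESSION = GZIP       -- Compression Type: gzip
        NULL_IF = ('null')       -- String Null is Null: On
        TRIM_SPACE = TRUE        -- Trim String Columns: On
    )
""")
```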
New Table Name
= string
Specify the name of the new table to be created. This property is only available when Type is set to External.
Stage Database
= drop-down
Specify the stage database. The special value, [Environment Default], will use the database defined in the environment. This property is only available when Type is set to External.
Stage Schema
= drop-down
Specify the stage schema. The special value, [Environment Default], will use the schema defined in the environment. This property is only available when Type is set to External.
Stage
= drop-down
Select a stage. This property is only available when Type is set to External.
Delta Lake on Databricks
Name
= string
A human-readable name for the component.
Authentication Method
= drop-down
Authenticate this component using either an Anaplan OAuth entry or valid Anaplan username/password credentials.
OAuth
= drop-down
A configured Anaplan OAuth entry. Read Anaplan Bulk authentication guide to learn how to create an OAuth entry.
Note
If you don't expect to run jobs with the Anaplan Bulk component regularly, it's recommended that you use a non-rotatable refresh token for your OAuth entries. Rotatable refresh tokens expire after 35 minutes.
Username
= string
Your Anaplan login username.
Password
= string
Your Anaplan login password. Use Manage Passwords to save your password as a named entry, or store the password directly in the component.
Warning
Login passwords have a 90-day lifespan in Anaplan. You must reset your password manually. If your login fails because your password has expired, OAuth clients will also fail. Once you reset your password, OAuth clients will resume normal functionality.
Data Source Type
= drop-down
Select the data source type: Export or View. Your choice determines which of the properties below are available.
Workspace
= drop-down
An Anaplan workspace. You can find your Workspace ID in the URL of a given model. See the portion of the URL that reads /workspaces/[workspace-id]/, where [workspace-id] is an alphanumeric Workspace ID.
Model
= drop-down
An Anaplan model. You can find your Model ID in the URL of a given model. See the portion of the URL that reads /models/[model-id]/, where [model-id] is an alphanumeric Model ID.
Module
= drop-down
An Anaplan module. A module represents a specific function within a model, such as margin calculation or profit and loss. This property is only available when using a view data source.
Export
= drop-down
Select an available export. Refer to Anaplan's documentation on creating export actions to learn more.
Note
It's recommended that you format your exports as "Tabular Single or Multi Column CSV" to allow exports to be stored in a cloud data warehouse as a standard table.
View
= drop-down
Select an available view. Refer to Anaplan's documentation on creating custom views to learn more.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Matillion ETL environment setup. Selecting a catalog will determine which databases are available in the next parameter.
Database
= drop-down
Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup.
Target Table
= string
The name of the table to be created.
Warning
This table will be recreated and will drop any existing table of the same name.
Stage Platform
= drop-down
Select a staging setting.
- AWS S3: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3.
- Azure Blob: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure.
- Personal Staging: Uses a Databricks personal staging location. Your Matillion ETL environment connection to Delta Lake on Databricks requires your username to be "token" and the corresponding password to be a masked entry for a Databricks access token (AWS). If you're on Azure, the token is already set. Read Authentication using Azure Databricks personal access tokens.
Additionally, read Configure Unity Catalog storage account for CORS to learn how to configure CORS to enable Databricks to manage personal staging locations in Unity Catalog (AWS). If you're using Azure, read Configure Unity Catalog storage account for CORS.
S3 Staging Area
= S3 bucket
Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Storage Account
= drop-down
Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.
Blob Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Encryption
= drop-down
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- None: No encryption.
- SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
- SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID
= drop-down
The ID of the KMS encryption key you have chosen to use in the Encryption property.
Load Options
= multiple drop-downs
- Clean Staged Files: Destroy staged files after loading data. Default is On.
- String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
- File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
- Compression Type: Set the compression type to either gzip (default) or None.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
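In Delta Lake terms, the load step corresponds roughly to a COPY INTO from the staging location. A minimal sketch using the databricks-sql-connector; the host, HTTP path, token, and object names are placeholders, and the SQL Matillion generates may differ.

```python
# Sketch: COPY INTO a Delta table from a staged CSV location. All
# connection details and names are placeholders.
from databricks import sql

with sql.connect(
    server_hostname="<workspace>.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/<warehouse-id>",
    access_token="<token>",
) as conn:
    with conn.cursor() as cur:
        cur.execute("""
            COPY INTO main.analytics.anaplan_target
            FROM 's3://my-staging-bucket/anaplan-stage/'
            FILEFORMAT = CSV
            FORMAT_OPTIONS ('header' = 'true', 'compression' = 'gzip')
        """)
```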
Amazon Redshift
Name
= string
A human-readable name for the component.
Authentication Method
= drop-down
Authenticate this component using either an Anaplan OAuth entry or valid Anaplan username/password credentials.
OAuth
= drop-down
A configured Anaplan OAuth entry. Read Anaplan Bulk authentication guide to learn how to create an OAuth entry.
Note
If you don't expect to run jobs with the Anaplan Bulk component regularly, it's recommended that you use a non-rotatable refresh token for your OAuth entries. Rotatable refresh tokens expire after 35 minutes.
Username
= string
Your Anaplan login username.
Password
= string
Your Anaplan login password. Use Manage Passwords to save your password as a named entry, or store the password directly in the component.
Warning
Login passwords have a 90-day lifespan in Anaplan. You must reset your password manually. If your login fails because your password has expired, OAuth clients will also fail. Once you reset your password, OAuth clients will resume normal functionality.
Data Source Type
= drop-down
Select the data source type: Export or View. Your choice determines which of the properties below are available.
Workspace
= drop-down
An Anaplan workspace. You can find your Workspace ID in the URL of a given model. See the portion of the URL that reads /workspaces/[workspace-id]/, where [workspace-id] is an alphanumeric Workspace ID.
Model
= drop-down
An Anaplan model. You can find your Model ID in the URL of a given model. See the portion of the URL that reads /models/[model-id]/, where [model-id] is an alphanumeric Model ID.
Module
= drop-down
An Anaplan module. A module represents a specific function within a model, such as margin calculation or profit and loss. This property is only available when using a view data source.
Export
= drop-down
Select an available export. Refer to Anaplan's documentation on creating export actions to learn more.
Note
It's recommended that you format your exports as "Tabular Single or Multi Column CSV" to allow exports to be stored in a cloud data warehouse as a standard table.
View
= drop-down
Select an available view. Refer to Anaplan's documentation on creating custom views to learn more.
Type
= drop-down
- External: The data will be put into your chosen S3 bucket and referenced by an external table.
- Standard: The data will be staged on your chosen S3 bucket before being loaded into a table. This is the default setting.
Schema
= drop-down
Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.
Note
An external schema is required if the Type property is set to External.
Target Table
= string
The name of the table to be created.
Warning
This table will be recreated and will drop any existing table of the same name.
Location
= S3 bucket
An S3 bucket path that will be used to store the data. Once the data is on an S3 bucket, it can be referenced by an external table. This property is only available when the Type property is set to External.
S3 Staging Area
= S3 bucket
Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Use Accelerated Endpoint
= boolean
When True, data will be loaded via the s3-accelerate endpoint. Please consider the following information:
- Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Read Using the Amazon S3 Transfer Acceleration Speed Comparison tool to learn how you can compare accelerated and non-accelerated upload speeds across Amazon S3 Regions.
- Users must manually set the acceleration configuration of an existing bucket. To learn more, see PutBucketAccelerateConfiguration in the API Reference, available at the AWS documentation.
- This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
- Cases may arise where Matillion ETL cannot determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In these cases, Matillion ETL will reveal this property for user input on a "just in case" basis and may return a validation message that reads "OK - Bucket could not be validated." Similarly, if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission), Matillion ETL will again show this property "just in case".
- The default setting is False.
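As a point of reference, you can check a bucket's acceleration status and route uploads through the accelerated endpoint yourself. A minimal sketch with boto3; the bucket name is a placeholder.

```python
# Sketch: check S3 Transfer Acceleration status, then upload through the
# s3-accelerate endpoint. Bucket name is a placeholder.
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
# Requires the s3:GetAccelerateConfiguration permission mentioned above.
status = s3.get_bucket_accelerate_configuration(Bucket="my-staging-bucket")
print(status.get("Status"))  # "Enabled" or "Suspended"; absent if never set

# A client configured to use the accelerated endpoint for transfers.
fast_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
fast_s3.upload_file("export.csv", "my-staging-bucket", "anaplan-stage/export.csv")
```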
Distribution Style
= drop-down
- All: Copy rows to all nodes in the Redshift cluster.
- Auto: (Default) Allow Redshift to manage your distribution style.
- Even: Distribute rows around the Redshift cluster evenly.
- Key: Distribute rows around the Redshift cluster according to the value of a key column.
Note
Table distribution is critical to good performance. Read the Distribution styles documentation for more information.
Sort Key
= dual listbox
This is optional, and lets users specify one or more columns from the input that should be set as the table's sort key.
Note
Sort keys are critical to good performance. Read Working with sort keys for more information.
Sort Key Options
= drop-down
Decide whether the sort key is of a compound or interleaved variety.
Primary Keys
= dual listbox
Select one or more columns to be designated as the table's primary key.
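For context, the sketch below shows how distribution style, sort key, and primary key choices come together in Redshift DDL; the column names and connection details are hypothetical, and the DDL Matillion generates may differ.

```python
# Sketch: a Redshift table with KEY distribution, a compound sort key,
# and a primary key. All names and connection details are placeholders.
import psycopg2

conn = psycopg2.connect(host="<cluster>", dbname="<db>", user="<user>",
                        password="<password>", port=5439)
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE anaplan_target (
            account_id VARCHAR(64),
            period     DATE,
            amount     NUMERIC(18, 2),
            PRIMARY KEY (account_id, period)
        )
        DISTSTYLE KEY
        DISTKEY (account_id)
        COMPOUND SORTKEY (period, account_id)
    """)
conn.commit()
```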
Load Options
= multiple drop-downs
- Comp Update: Apply automatic compression to the target table. Default is On.
- Stat Update: Automatically update statistics when filling a table; here, the statistics of the target table are updated. Default is On.
- Clean S3 Objects: Automatically remove UUID-based objects on the S3 bucket. Default is On. Effectively, users decide here whether to keep the staged data in the S3 bucket or not.
- String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is On.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
- File Prefix: Give staged file names a prefix of your choice. When this Load Option is selected, users should set their preferred prefix in the text field.
- Compression Type: Set the compression type to either gzip (default) or None.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
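Several of these options map loosely onto Redshift COPY parameters. A minimal sketch for reference; the IAM role, bucket, table, and connection details are placeholders, and the exact statement Matillion generates may differ.

```python
# Sketch: a COPY statement whose parameters mirror the load options above.
# All names, credentials, and the IAM role ARN are placeholders.
import psycopg2

conn = psycopg2.connect(host="<cluster>", dbname="<db>", user="<user>",
                        password="<password>", port=5439)
with conn.cursor() as cur:
    cur.execute("""
        COPY anaplan_target
        FROM 's3://my-staging-bucket/anaplan-stage/'
        IAM_ROLE 'arn:aws:iam::111122223333:role/<redshift-role>'
        CSV GZIP                -- Compression Type: gzip
        NULL AS 'null'          -- String Null is Null: On
        COMPUPDATE ON           -- Comp Update: On
        STATUPDATE ON           -- Stat Update: On
    """)
conn.commit()
```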
Encryption
= drop-down
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- None: No encryption.
- SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
- SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID
= drop-down
The ID of the KMS encryption key you have chosen to use in the Encryption property.
Google BigQuery
Name
= string
A human-readable name for the component.
Authentication Method
= drop-down
Authenticate this component using either an Anaplan OAuth entry or valid Anaplan username/password credentials.
OAuth
= drop-down
A configured Anaplan OAuth entry. Read Anaplan Bulk authentication guide to learn how to create an OAuth entry.
Note
If you don't expect to run jobs with the Anaplan Bulk component regularly, it's recommended that you use a non-rotatable refresh token for your OAuth entries. Rotatable refresh tokens expire after 35 minutes.
Username
= string
Your Anaplan login username.
Password
= string
Your Anaplan login password. Use Manage Passwords to save your password as a named entry, or store the password directly in the component.
Warning
Login passwords have a 90-day lifespan in Anaplan. You must reset your password manually. If your login fails because your password has expired, OAuth clients will also fail. Once you reset your password, OAuth clients will resume normal functionality.
Data Source Type
= drop-down
Select the data source type: Export or View. Your choice determines which of the properties below are available.
Workspace
= drop-down
An Anaplan workspace. You can find your Workspace ID in the URL of a given model. See the portion of the URL that reads /workspaces/[workspace-id]/, where [workspace-id] is an alphanumeric Workspace ID.
Model
= drop-down
An Anaplan model. You can find your Model ID in the URL of a given model. See the portion of the URL that reads /models/[model-id]/, where [model-id] is an alphanumeric Model ID.
Module
= drop-down
An Anaplan module. A module represents a specific function within a model, such as margin calculation or profit and loss. This property is only available when using a view data source.
Export
= drop-down
Select an available export. Refer to Anaplan's documentation on creating export actions to learn more.
Note
It's recommended that you format your exports as "Tabular Single or Multi Column CSV" to allow exports to be stored in a cloud data warehouse as a standard table.
View
= drop-down
Select an available view. Refer to Anaplan's documentation on creating custom views to learn more.
Table Type
= drop-down
Select whether the table is Native (the default in BigQuery) or External.
Project
= drop-down
Select the Google Cloud project. The special value, [Environment Default], will use the project defined in the environment. For more information, read Creating and managing projects.
Dataset
= drop-down
Select the Google BigQuery dataset to load data into. The special value, [Environment Default], will use the dataset defined in the environment. For more information, read Introduction to datasets.
Target Table
= string
A name for the table. Only available when the table type is Native.
Warning
This table will be recreated and will drop any existing table of the same name.
New Target Table
= string
A name for the new external table. Only available when the table type is External.
Cloud Storage Staging Area
= Google Cloud Storage bucket
The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data. Only available when the table type is Native.
Location
= Google Cloud Storage bucket
The URL and path of the target Google Cloud Storage bucket. Only available when the table type is External.
Load Options
= multiple drop-downs
- Clean Cloud Storage Files: Destroy staged files on Google Cloud Storage after loading data. Default is On.
- Cloud Storage File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
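In BigQuery terms, the load step corresponds roughly to a load job from the Cloud Storage staging area, with a truncating write disposition mirroring Recreate Target Table. A minimal sketch with the google-cloud-bigquery client; the URI, project, dataset, and table names are placeholders.

```python
# Sketch: load staged CSV files from Cloud Storage into a BigQuery table,
# truncating it first. All URIs and names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    # WRITE_TRUNCATE replaces the table contents, akin to Recreate Target Table.
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
job = client.load_table_from_uri(
    "gs://my-staging-bucket/anaplan-stage/*.csv",
    "my-project.my_dataset.anaplan_target",
    job_config=job_config,
)
job.result()  # wait for the load to finish
```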
Strategy
Connect to Anaplan and load an export or view. The results of the query are streamed into cloud storage and then loaded into the target table.
| Snowflake | Delta Lake on Databricks | Amazon Redshift | Google BigQuery | Azure Synapse Analytics |
| --- | --- | --- | --- | --- |
| ✅ | ✅ | ✅ | ✅ | ❌ |