Migration: Automatic variables
The Data Productivity Cloud supports most Matillion ETL automatic variables through a similar concept called system variables. While the two serve comparable purposes, they differ in syntax:
- Matillion ETL: `${my_variable_name}`
- Data Productivity Cloud: `${sysvar.object.property}`
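For example, a reference to the running job's name is written as follows in each product (this pairing comes from the mapping table below):

```
${job_name}                       # Matillion ETL
${sysvar.thisPipeline.fullName}   # Data Productivity Cloud
```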
A clear pathway is established for migrating these variables and mapping them directly using the migration tool.
For more details of variable migration in general, read Migration: Variables.
Migration path
The table below lists the automatic variables that have corresponding system variable mappings. To view the full list of system variables, read List of system variables. You will need to manually edit pipeline components to use the correct variables.
| Matillion ETL automatic variable | Data Productivity Cloud system variable |
| --- | --- |
| component_name | sysvar.thisComponent.name |
| component_message | sysvar.thisComponent.message |
| environment_id | sysvar.environment.id |
| environment_name | sysvar.environment.name |
| job_name | sysvar.thisPipeline.fullName |
| project_id | sysvar.project.id |
| run_history_id | sysvar.rootPipeline.executionId |
| task_id | sysvar.thisComponent.taskId |
| version_name | sysvar.artifact.versionName |
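When manually editing components (for example, SQL statements that embed variable references), a small script can help find and rewrite references in bulk. Below is a minimal sketch in Python; the helper function and its behavior are illustrative, not part of the migration tool:

```python
import re

# Mapping taken from the table above: Matillion ETL automatic variable
# name -> Data Productivity Cloud system variable reference.
AUTOVAR_TO_SYSVAR = {
    "component_name": "sysvar.thisComponent.name",
    "component_message": "sysvar.thisComponent.message",
    "environment_id": "sysvar.environment.id",
    "environment_name": "sysvar.environment.name",
    "job_name": "sysvar.thisPipeline.fullName",
    "project_id": "sysvar.project.id",
    "run_history_id": "sysvar.rootPipeline.executionId",
    "task_id": "sysvar.thisComponent.taskId",
    "version_name": "sysvar.artifact.versionName",
}

def rewrite_references(text: str) -> str:
    """Rewrite ${automatic_variable} references to their ${sysvar...}
    equivalents, leaving unrecognized variable names untouched."""
    def substitute(match: re.Match) -> str:
        sysvar = AUTOVAR_TO_SYSVAR.get(match.group(1))
        return "${" + sysvar + "}" if sysvar else match.group(0)
    return re.sub(r"\$\{(\w+)\}", substitute, text)

print(rewrite_references("SELECT '${job_name}', '${run_history_id}'"))
# SELECT '${sysvar.thisPipeline.fullName}', '${sysvar.rootPipeline.executionId}'
```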
Not supported
The following Matillion ETL automatic variables don't have equivalents in the Data Productivity Cloud.
- detailed_error
- queued_time
- component_id
- job_id (see below)
- project_group_id
- project_group_name
- project_name
- version_id
- environment_catalog
- environment_endpoint
- environment_port
Not yet supported
The following Matillion ETL automatic variables don't currently have equivalents in the Data Productivity Cloud, but support will be added in a future release. See our Roadmap for details.
- Environment properties such as:
  - environment_username
  - environment_database
  - environment_default_schema
Accessing through scripts
The Data Productivity Cloud doesn't support directly accessing automatic variables through the Python Script or Bash Script components.
If you require this functionality, you can use an Update Scalar component to write the values to user-defined variables, which can then be passed to the script.
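For instance, suppose an Update Scalar component copies `${sysvar.thisPipeline.fullName}` and `${sysvar.rootPipeline.executionId}` into user-defined variables named pipeline_name and execution_id. A minimal sketch of the downstream script follows, assuming the Python Script component exposes pipeline variables to the script by name (the variable names are illustrative; verify the access mechanism against the component's documentation):

```python
# pipeline_name and execution_id are user-defined variables populated by an
# upstream Update Scalar component from ${sysvar.thisPipeline.fullName} and
# ${sysvar.rootPipeline.executionId}. Access-by-name is an assumption here --
# check the Python Script component documentation for the exact mechanism.
print(f"Running {pipeline_name} (execution {execution_id})")
```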
For alternative options when migrating Python and Bash scripts, read Migration: Bash and Migration: Python.
job_id
In Matillion ETL, the `job_id` variable uniquely identifies a job, allowing you to track it even if it's renamed.

In the Data Productivity Cloud, Git sits at the core of the architecture, so you can track a job by its artifact version and pipeline name, both of which are available as system variables. The `job_id` variable is therefore not supported in the Data Productivity Cloud, because it's no longer needed to track jobs.

You can retrieve the artifact version using `${sysvar.artifact.versionName}` and the pipeline name using `${sysvar.thisPipeline.fullName}`. Together, these provide a unique identifier for each job execution, similar to how `job_id` is used in Matillion ETL.
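As an illustration, if both values are passed into a script as user-defined variables (see Accessing through scripts above), they can be joined into a single tracking key; the variable names and key format here are illustrative:

```python
# artifact_version and pipeline_name are user-defined variables assumed to
# hold ${sysvar.artifact.versionName} and ${sysvar.thisPipeline.fullName}.
job_key = f"{artifact_version}::{pipeline_name}"
# e.g. "1.2.0::My Folder/My Pipeline" -- illustrative values only
```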