January 2025 changelog
Here you'll find the January 2025 changelog for the Data Productivity Cloud. Just want to read about new features? Read our New Features blog.
23rd January
Designer | Improvements 🔧
Designer UI:
- Updated the command palette to include the following action commands, all of which can be activated by typing SHIFT + >:
  - Run Pipeline
  - Validate Pipeline
  - Canvas: Add component
  - Canvas: Zoom in
  - Canvas: Zoom out
  - Canvas: Zoom to fit

To access the command palette, type CMD + K or CTRL + K, or click the magnifying glass button in the upper-right of the UI. Type CMD + SHIFT + K or CTRL + SHIFT + K to open the action commands list directly.
22nd January
Designer | New features 🎉
DataOps:
- Artifacts can now be created to ensure that the version of a pipeline you release to production is the same version you have tested in your environments. An artifact is an immutable collection of resources (such as pipelines and script files) that is deployed to your chosen environment when you publish. Our documentation contains more information about artifacts and how they relate to schedules.
- The process of pushing and publishing local changes using Git push has been changed. For more details about how to push and publish your local changes to create a schedule, read Git push.
- For more information on how the Data Productivity Cloud can help you to take an innovative approach to DataOps, read DataOps in the Data Productivity Cloud.
Networks:
- Customers using the Full SaaS Data Productivity Cloud can use the Database Query and RDS Query components to access data sources within their own infrastructure through a configured SSH tunnel. This is set up via the new Networks tab. Read our Networks guide to get started.
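For readers new to SSH tunnelling, the sketch below shows the general idea using the open-source sshtunnel Python package: a local port is forwarded through a reachable SSH host to a database that is otherwise private. The host names, ports, and key path are hypothetical, and the Networks tab manages the equivalent configuration for you; this is a conceptual illustration, not the Data Productivity Cloud implementation.

```python
# Conceptual sketch of an SSH tunnel using the open-source "sshtunnel"
# package. All host names, ports, and the key path are hypothetical;
# in the Data Productivity Cloud, the Networks tab manages this for you.
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ("bastion.example.com", 22),                # publicly reachable SSH host
    ssh_username="tunnel-user",
    ssh_pkey="/path/to/private_key",
    remote_bind_address=("db.internal", 5432),  # private database endpoint
    local_bind_address=("127.0.0.1", 15432),    # local end of the tunnel
) as tunnel:
    # A database client now connects to 127.0.0.1:15432, and its traffic
    # is forwarded to db.internal:5432 over the encrypted SSH connection.
    print(f"Tunnel open on local port {tunnel.local_bind_port}")
```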
20th January
Designer | New features 🎉
Designer:
- System variables are now available in public preview. System variables provide component execution metadata, such as row count, execution duration, and more.
- Post-processing is now available in public preview. Each orchestration pipeline component now includes a Post-processing tab, where you can update scalar variables and map your existing user-defined pipeline variables and project variables to system variables.
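As a loose illustration of what post-processing does, the sketch below copies component execution metadata into user-defined variables after a run. The variable names and values are hypothetical and do not reflect Data Productivity Cloud identifiers; read the system variables documentation for the actual names.

```python
# Conceptual sketch of post-processing: after a component runs, selected
# system variables (execution metadata) are mapped onto user-defined
# pipeline variables. All names and values here are hypothetical.
system_variables = {"row_count": 1204, "duration_ms": 5310}
pipeline_variables = {"rows_loaded": None, "load_time_ms": None}

# The Post-processing tab is, in effect, a mapping like this one.
mapping = {"rows_loaded": "row_count", "load_time_ms": "duration_ms"}
for user_var, sys_var in mapping.items():
    pipeline_variables[user_var] = system_variables[sys_var]

print(pipeline_variables)  # {'rows_loaded': 1204, 'load_time_ms': 5310}
```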
16th January
Designer | New features 🎉
Migration tool:
- The migration tool is now generally available for Matillion ETL users who wish to migrate their data pipelines to the Data Productivity Cloud. Migrating from Matillion ETL to the Data Productivity Cloud is a complex process, and we urge you to read all of the documentation listed here:
- Migration considerations focuses on what a migration will involve and the steps required before getting started.
- Migration feature parity details the current differences between Matillion ETL and the Data Productivity Cloud that you should consider.
- Migration process explains how to export your jobs from Matillion ETL and import them to the Data Productivity Cloud.
- Migration mappings explains how you can resolve issues when migrating shared jobs in Matillion ETL to the shared pipelines feature in the Data Productivity Cloud.
Hub | New features 🎉
- Data lineage filtering is now available for customers who want to enhance the clarity of data flows in their Data Productivity Cloud pipelines.
14th January
Designer | New features 🎉
Orchestration:
- The JDBC Table Metadata to Grid component is now available for Databricks projects.
Hub | New features 🎉
- The Super Admin role is now available in the Data Productivity Cloud for new and existing accounts. This role gives users access to everything in an account, including all users, projects, environments, and more. For new accounts, it is applied automatically; for more information, read Registration. For existing accounts, submit a support ticket with the account number and the user's email address to request the role assignment.
Data Loader CDC | Improvements 🔧
- Updated the CDC agent version from 2.101.2 to 2.102.1.
- Improved logging around the compaction of a pipeline's schema history.
- Library updates and security fixes.
8th January
Designer | New features 🎉
Code Editor:
- Added a new feature, Code Editor, to Designer. Code Editor brings an improved high-code experience to the Data Productivity Cloud. It is powered by the Monaco Editor, which also powers Visual Studio Code and other editors across the web.
- Added the ability to create .sql and .py files in Designer.
- Added the ability to edit files in Code Editor.
- Improved the SQL Script and Python Pushdown components by adding a Script Location property. With this property, you can either run an SQL or Python script directly within the component (the current behaviour) or run an existing .sql or .py file in your project instead. Any .sql and .py files in the repository connected to your project can be edited using Code Editor and run within SQL Script or Python Pushdown components; a sketch of such a file follows the note below.
Note
Python Pushdown is currently only available for Snowflake projects.
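To make the Script Location property concrete, the file below is the kind of standalone .py file you could keep in your connected repository, edit in Code Editor, and point a Python Pushdown component at instead of an inline script. The file name, function, and sample data are hypothetical; the exact objects exposed to a pushdown script are described in the Python Pushdown documentation.

```python
# transform_check.py - a hypothetical standalone script kept in the project
# repository. With the Script Location property, a Python Pushdown component
# can run a file like this instead of an inline script.

def summarise(rows):
    """Toy post-load check: count rows and fail fast on an empty result."""
    count = len(rows)
    if count == 0:
        raise ValueError("Expected at least one row after the load step")
    return {"row_count": count}

if __name__ == "__main__":
    sample = [("ORD-1", 100.0), ("ORD-2", 250.0)]  # stand-in for query output
    print(summarise(sample))
```

Keeping scripts as files in the repository means they are versioned alongside the rest of the project, which is also what makes them eligible for inclusion in artifacts.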