
2025 changelogπŸ”—

Here you'll find the 2025 changelog for the Data Productivity Cloud. Just want to read about new features? Read our New Features blog.


DecemberπŸ”—

December 18πŸ”—

Designer: New features πŸŽ‰


December 17πŸ”—

Designer: New features πŸŽ‰

  • You can now create unlimited API credentials and assign each credential a specific role at the account level. This update enables safer integrations and a cleaner separation of duties for your automation. For more information, read Account roles for API credentials.

Components:

  • Added a new ORC Load component for Snowflake that loads data directly into a Snowflake table from one or more ORC files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read ORC Load.
  • Added a new AVRO Load component for Snowflake that loads data directly into a Snowflake table from one or more AVRO files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read AVRO Load.
  • Added a new Parquet Load component for Snowflake that loads data directly into a Snowflake table from one or more Parquet files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read Parquet Load.

December 9πŸ”—

Designer: New features πŸŽ‰

  • You can now quickly locate any file or folder in your project using the new search bar in the Files Panel.

Components:

  • Added a new Show to Grid component for Snowflake, which allows you to save the results of a Snowflake SHOW command into a grid variable for further processing within your orchestration pipeline.

Billing: New features πŸŽ‰

  • A new Same as billing checkbox is now available in the Shipping Address section of the credit card payment method, allowing you to automatically populate shipping details using your billing address.

December 8πŸ”—

Designer: New features πŸŽ‰

  • You can now set default session parameters for Snowflake connections when you create an environment. Session parameters can be used to modify the behavior of your Snowflake connection.

December 4πŸ”—

Streaming: Improvements πŸ”§

  • Fixed a bug to ensure that a pipeline failure email will be sent if a pipeline fails to restart when scheduled, and the pipeline is opted in to receive error emails.

December 2πŸ”—

Designer: New features πŸŽ‰

Connectors:

  • Added a new Snowflake Load connector, which enables you to fetch data from a Snowflake database. This new connector offers full and incremental load options, allowing you to only fetch new and updated rows of data.

NovemberπŸ”—

November 27πŸ”—

Designer: New features πŸŽ‰

Matillion MCP server:

  • The Matillion MCP server is now publicly available. The MCP server offers a secure, standardized way for Large Language Models (LLMs) to interact with your Data Productivity Cloud resources. By configuring the MCP server, you can enable AI assistants to help with tasks such as monitoring pipeline executions, analyzing credit consumption, and triggering pipeline runs.

Maia:

  • You can now rename and delete previous conversations with Maia to make it easier to find useful previous chats, or clear conversations that you no longer need.

November 26πŸ”—

Designer: New features and endpoints πŸŽ‰

Agents:

  • You can now restrict which agents are available to a specific project or environment, and ensure that only specific environments are allowed to use agents. For more information, read Restricting agents.

API:

Agents

Method Endpoint Description
DELETE /v1/agents/{agentId}/allowlist Remove project from agent allow list.
GET /v1/agents/{agentId}/allowlist Get the agent's allow list.
PATCH /v1/agents/{agentId}/allowlist Add projects and/or environments to an agent's allow list.
PUT /v1/agents/{agentId}/allowlist Set an agent's allow list.
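
The allow-list endpoints above can be driven from any HTTP client. Below is a minimal Python sketch; the base URL, token, and `{"projects": [...]}` body shape are illustrative assumptions, not documented values — consult the API reference for the real schema.

```python
# Sketch of calling the agent allow-list API with Python's standard library.
# BASE_URL, TOKEN, and the request-body shape are hypothetical placeholders.
import json
from urllib import request

BASE_URL = "https://example.api.host"  # hypothetical base URL
TOKEN = "YOUR_API_TOKEN"               # replace with a real credential

def allowlist_url(agent_id: str) -> str:
    """Build the allow-list endpoint path for a given agent."""
    return f"{BASE_URL}/v1/agents/{agent_id}/allowlist"

def add_to_allowlist(agent_id: str, project_ids: list) -> request.Request:
    """Prepare a PATCH request that adds projects to an agent's allow list."""
    body = json.dumps({"projects": project_ids}).encode()
    req = request.Request(allowlist_url(agent_id), data=body, method="PATCH")
    req.add_header("Authorization", f"Bearer {TOKEN}")
    req.add_header("Content-Type", "application/json")
    return req  # send with request.urlopen(req) when ready
```

Swapping the method to `PUT` replaces the whole allow list rather than appending to it, per the table above.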

November 21πŸ”—

Designer: New features πŸŽ‰

  • You can now use slash commands when interacting with Maia. These commands simplify complex prompts into single-word commands that instruct Maia to document, explain, fix, optimize and annotate files in your project.

November 14πŸ”—

Designer: New features πŸŽ‰

Maia:

  • Maia now offers plan mode, which allows you to chat with Maia about your pipelines without making any changes to their configuration. In plan mode, Maia will suggest a multi-step plan to achieve your goal, but will not make any changes until you approve the plan.
  • Maia can now move, copy and rename files in your project. For example, you can now tell Maia to "Make a copy of my 'customer_data' pipeline, rename it to 'customers', and move it into my 'customer_data' folder".
  • Maia now supports grid variables. As a result, it can create grid variables with defined columns and default values, build pipelines that use grid variables, and help to configure components such as Grid Iterator and Append to Grid.

November 13πŸ”—

Agents: New features πŸŽ‰

Designer: New features πŸŽ‰

Connectors:

  • Added a new JSON Load component for Snowflake that loads data directly into a Snowflake table from one or more JSON files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read JSON Load.

November 10πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.113.69 to 2.113.75.
  • Improved error messages shown in the pipeline event log if the Streaming pipeline stops with an error.
  • Library updates and security fixes.

November 7πŸ”—

Designer: New features πŸŽ‰

Connectors:


November 6πŸ”—

Designer: New features πŸŽ‰

Maia:

We have added new Maia features:

  • Maia can now visualize a sample of your data directly in the chat interface, so you can spot trends at a glance. To use this feature, ask Maia to create a chart visualizing the data in a component in your pipeline. For more details, read Visualizing data.
  • Maia can now search files in your project and provide information about their content. For example, you can ask Maia to tell you how you have configured specific components, or which pipelines contain data from a specific table. For more information, read File exploration.
  • When creating a custom connector with Maia, you can now upload a PDF, JSON, or YAML file, including Postman collections and OpenAPI specifications.

Pipeline quality review:

  • Added a new Review button to the Designer canvas. The Review feature lets you check whether your pipelines meet a defined set of rules, so you can be sure that your pipelines align with your organization's standards. For more information, read Reviewing pipeline quality.

OctoberπŸ”—

October 30πŸ”—

Designer: New features πŸŽ‰

Connectors:


October 28πŸ”—

Designer: New features πŸŽ‰

Connectors:

  • Added a new CSV Load component for Snowflake that loads data directly into a Snowflake table from one or more CSV files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read CSV Load.

October 23πŸ”—

Designer: New features πŸŽ‰

Connectors:

Agents: Improvements πŸ”§

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.113.23 to 2.113.69.
  • Library updates and security fixes.

October 22πŸ”—

Designer: Improvements πŸ”§

  • Maia now saves your recent conversations, making it easy to continue where you left off or revisit previous work. To resume a previous conversation with Maia, either click the conversation in the Branch conversations section of the Maia interface, or click Chat history in the top right to view a list of recent conversations. For more information, read Sessions and tools in Maia.

October 21πŸ”—

Designer: New features πŸŽ‰

Connectors:

  • Added a new NetSuite SuiteAnalytics Load connector, which offers full and incremental load options. This component is currently only available for Snowflake projects.

October 16πŸ”—

Designer: Improvements πŸ”§

  • Added the ability to include descriptions with pipelines, to improve collaboration and clarity when sharing pipelines. You can now add a description to your pipelines during creation, and at the point of sharing. Descriptions can be a maximum of 1000 characters.
  • We have recently addressed an inconsistency where nested variables (i.e. a variable specified as the value of another variable) were not always being resolved. To ensure predictable and reliable pipeline execution, all nested variables will now be consistently resolved to their final value, up to one level of nesting. We understand that some pipelines may have been built to rely on the previous behavior. While we never intend to disrupt your workflows, this update was necessary to guarantee a consistent experience for all users and pipelines. For more information, read Using variables.
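
As an illustration of the one-level resolution behavior described above, here is a minimal sketch. The `${name}` syntax and resolution loop are simplified assumptions for illustration, not the engine's actual implementation.

```python
# Sketch of resolving nested variables up to one level of nesting.
# The ${name} reference syntax is an assumption for this illustration.
import re

def resolve(value: str, variables: dict, depth: int = 1) -> str:
    """Resolve ${name} references in value, up to `depth` levels of nesting."""
    pattern = re.compile(r"\$\{(\w+)\}")
    for _ in range(depth + 1):  # one extra pass resolves one nested level
        new_value = pattern.sub(
            lambda m: variables.get(m.group(1), m.group(0)), value
        )
        if new_value == value:  # nothing left to resolve
            break
        value = new_value
    return value

variables = {"env": "prod", "table": "sales_${env}"}
print(resolve("${table}", variables))  # sales_prod
```

A reference to an undefined variable is left unchanged in this sketch, which keeps the behavior predictable when a name is misspelled.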

October 14πŸ”—

Designer: Improvements πŸ”§

  • Added a new Passphrase parameter (optional) to the Database Query component. Use the drop-down menu to select the secret definition storing your passphrase. If your private key is passphrase protected, you will also need to add a secret to store the passphrase. For more information, read Using Snowflake key-pair authentication.

October 9πŸ”—

API: New Endpoints πŸŽ‰

The following endpoints have been added to the Data Productivity Cloud REST API. You can use them to create a project within the Data Productivity Cloud. For more information, read Project Provisioning API and Generating a Personal Access Token.

Project provisioning

Method Endpoint Description
POST /v1/projects Create a project
POST /v1/projects/{projectId}/data-platform-connections Create a default warehouse connection
POST /v1/projects/{projectId}/environments Create an environment
POST /v1/projects/{projectId}/repositories Initialize a repository
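
The four endpoints form a natural provisioning sequence: create the project, then its warehouse connection, environment, and repository. A minimal sketch of that ordering follows; only the method/path pairs come from the table above, and the payloads are omitted.

```python
# Sketch of the project provisioning call order using the paths listed above.
# Request bodies are intentionally omitted; see the API reference for schemas.

def provisioning_steps(project_id: str) -> list:
    """Return the (method, path) sequence for provisioning a new project."""
    return [
        ("POST", "/v1/projects"),  # the response supplies the project ID
        ("POST", f"/v1/projects/{project_id}/data-platform-connections"),
        ("POST", f"/v1/projects/{project_id}/environments"),
        ("POST", f"/v1/projects/{project_id}/repositories"),
    ]
```

Because the later calls are keyed on `{projectId}`, the first call must complete before the rest can be issued.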

October 7πŸ”—

Designer: New features πŸŽ‰

  • Added a new IBM DB2 for i Load connector. This connector offers a new Incremental Load option, which allows you to only load new or updated records each time your pipeline runs.

October 6πŸ”—

Audit service API: New audit events πŸŽ‰


October 2πŸ”—

Designer: New features πŸŽ‰

Designer:

  • When sampling data, you can now use an SQL query to filter the rows displayed in the table in the Sample data tab. You can filter for string, number, and date values, as well as using AND and OR operators to combine different filter criteria. For more information, read Filtering sampled data.

Connectors:

  • Added a new Workday Load connector, which offers full and incremental load options.

SeptemberπŸ”—

September 29πŸ”—

Data Productivity Cloud: New features πŸŽ‰

Custom Connector:

  • You can now add multiple OAuth providers to a single Custom Connector endpoint.

September 24πŸ”—

Designer: New features πŸŽ‰

Git actions:

Added a new Revert changes action to the branch menu. You can now revert your branch to a previous commit, which will undo all the changes that have been made to your branch after the selected commit. This new feature ensures that your commit history stays intact, and allows you to track all the changes made to your branch.


September 22πŸ”—

Designer: New features πŸŽ‰

Schedules:

You can now run schedules on demand using the Run now option in the Schedules tab of your project. For more details, read Run now.


September 18πŸ”—

Designer: New features πŸŽ‰

Connectors:

Added a new Jira Load connector, which offers full and incremental load options.


September 17πŸ”—

Designer: New features πŸŽ‰

Connectors:

A new endpoint for accessing OpenLineage events has been added to the Data Productivity Cloud Flex connector.


September 11πŸ”—

Designer: Improvements πŸ”§

Maia:

  • Maia is now faster at completing a range of tasks, especially when adding multiple instances of the same component to the Designer canvas.
  • Maia can now make multiple instances of the same type of change at the same time. For example, when adding or updating table prefixes for multiple Table Output components in a pipeline.
  • Maia can now interact with scalar variables.
  • Maia can now use iterators on components.

Designer UI:

  • You can now add a component between two existing, connected components. Click the connector line, then click + and choose the component you wish to insert between them.

September 10πŸ”—

Designer: Improvements πŸ”§

Schedules:

  • Increased the maximum number of schedules that can be created per account from 500 to 1000.

September 8πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.113.15 to 2.113.23.
  • Fixed an issue where a failed snapshot would cause the pipeline to lose any streaming events that were created during the snapshot window.
  • Library updates and security fixes.

September 5πŸ”—

Designer: New features πŸŽ‰

Connectors:

  • Added a Shopify Load connector, which offers full and incremental load options.

Designer: Improvements πŸ”§

  • Added the dbt Core Version property to the dbt Core component. This property allows you to select which version of dbt Core to use.

September 1πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.113.6 to 2.113.15.
  • Library updates and security fixes.

AugustπŸ”—

August 28πŸ”—

Agents: New features πŸŽ‰


August 27πŸ”—

Designer: New features πŸŽ‰

Two new connectors are available, both offering full and incremental load options:

API: New Endpoints πŸŽ‰

The following endpoints have been added to the Data Productivity Cloud REST API:

Environments

Method Endpoint Description
DELETE /v1/projects/{projectId}/environments/{environmentName} Delete an environment by name.

Lineage

Method Endpoint Description
GET /v1/lineage/events Retrieve OpenLineage-formatted lineage events within a time range, with pagination.

Agent allow list

Method Endpoint Description
GET /v1/agents/{agentId}/allowlist Get the current allow list for an agent.
PATCH /v1/agents/{agentId}/allowlist Add projects and/or environments to an agent's allow list.
PUT /v1/agents/{agentId}/allowlist Replace the allow list for an agent (overwrites existing).
DELETE /v1/agents/{agentId}/allowlist Remove a project and/or environment from an agent's allow list.
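
For the lineage endpoint, events are retrieved within a time range with pagination. The sketch below builds the query string; the parameter names `startTime`, `endTime`, and `pageToken` are assumptions, so check the API reference for the real names.

```python
# Sketch of building a paginated query for GET /v1/lineage/events.
# The query-parameter names are assumed, not taken from the API reference.
from urllib.parse import urlencode

def lineage_events_url(start: str, end: str, page_token: str = "") -> str:
    """Build the query path for one page of OpenLineage events."""
    params = {"startTime": start, "endTime": end}
    if page_token:
        params["pageToken"] = page_token
    return "/v1/lineage/events?" + urlencode(params)
```

A client would call this in a loop, passing each response's continuation token back in until no token is returned.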

August 26πŸ”—

Designer: New features πŸŽ‰

  • The Files panel now displays indicators next to the name of a pipeline if it has uncommitted changes, to help you quickly identify changes that you have made but not yet committed. The indicators are N for new pipelines, M for modified pipelines, and R for renamed pipelines.

August 22πŸ”—

Designer: New features πŸŽ‰

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.113.5 to 2.113.6.
  • Fixed an issue in the DB2 connector when processing columns with datatype NCHAR or NVARCHAR.

August 21πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.112.12 to 2.113.5.
  • Library updates and security fixes.

August 20πŸ”—

Maia: Improvements πŸ”§

  • Maia has been updated with the following new features:
    • Automatic to-do lists: Maia now creates and displays a to-do list for the tasks you give it, helping you to track Maia's progress, and helping Maia to stay focused on the task at hand.
    • File mentions: You can now direct Maia to specific files in your project by mentioning them in your prompt. To do this, use @ followed by the file name, for example @myfile.md. Maia will then read the file and use it to inform its responses. You can also use @ to direct Maia to files you want it to update, or to provide specific context for the task at hand.
    • Textual file support: Maia can now read from and write to a range of textual files, including Markdown .md, SQL .sql, and Python .py files. This allows Maia to work directly with your code, create documentation, and edit other text-based files.

August 14πŸ”—

Data Productivity Cloud: Improvements πŸ”§

  • You can now specify a Policy ID when using Databricks Jobs Compute to execute Databricks transformation pipelines. This allows you to enforce governed compute usage in Databricks projects.

August 8πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.112.9 to 2.112.12.
  • Library updates and security fixes.

August 6πŸ”—

Designer: New features πŸŽ‰

You can now use the following new connectors in Snowflake projects. Both new connectors offer Incremental Load options.

  • JDBC Load
  • MariaDB Load
    • We recommend using this connector instead of using the Database Query component to connect to MariaDB.

Designer: Improvements πŸ”§

Sampling data:

  • You can now export the data displayed in the Sample data tab in Designer as a CSV file. For more information, read Sampling output.

August 1πŸ”—

Designer: New features πŸŽ‰

Maia:

You can now enable and disable Maia features in your account settings. For more information, read Edit account settings.


JulyπŸ”—

July 31πŸ”—

API: New Endpoints πŸŽ‰

  • New endpoints for creating and managing Streaming pipeline definitions have been added to the Data Productivity Cloud REST API. You can find a how-to guide for common tasks involving these endpoints here.

July 30πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.112.8 to 2.112.9.
  • Db2 for IBM i: Added support for processing journal entries where a date is an empty string.

Designer: Improvements πŸ”§

For Databricks projects, once a Jobs Compute configuration is created, the dialog named after your configuration now displays key details at a glance, rather than just the Compute ID. Additionally, if an environment is associated with a Jobs Compute, its configuration is now shown directly in the environment list when selected. For more information, read Databricks Jobs Compute.


July 29πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.111.76 to 2.112.8.
  • Library updates and security fixes.
  • Fixed an issue where a Streaming agent out-of-memory (OOM) error would not be reported as a pipeline event correctly.

July 25πŸ”—

Designer: New features πŸŽ‰

AI components:

  • A new Amazon OpenSearch Upsert component is now available. This component lets you upsert data converted into vector embeddings into an Amazon OpenSearch Service index using the Amazon OpenSearch Service API. Currently, this component only supports provisioned OpenSearch services, not serverless. Supported embedding providers include Amazon Bedrock and OpenAI.

API: New Endpoint πŸŽ‰

Method Endpoint Description
GET /v1/events Retrieves audit events in a given time range.

July 24πŸ”—

Designer: New features πŸŽ‰

Connectors:

Added the following connectors, which offer Incremental Load:


July 22πŸ”—

Designer: New features πŸŽ‰

Metadata sampling:

  • You can now use the Metadata tab to view the name, data type, and size of each column in a component's output. For more information, read Metadata.

Connectors:

Added the following Flex connectors for developing data pipelines:


July 18πŸ”—

Designer: New features πŸŽ‰

Components:

  • The X Ads Load component, which supersedes the X Ads component, is now available for Snowflake projects. This component lets you fetch data using the X Ads API.

July 11πŸ”—

Designer: New features πŸŽ‰

Components:

  • The Intersect component is now available for Databricks projects. This component lets you compare two datasets, and then return any rows that are identical in both datasets.

July 8πŸ”—

Designer: New features πŸŽ‰

Git actions:

  • When committing changes in Designer, you can now choose which files to include in each commit. For more information, read How to commit.

Data Productivity Cloud: New features πŸŽ‰

Amazon Redshift environments:

  • When creating an Amazon Redshift environment, you can now choose whether to use the credentials associated with the agent you selected when creating your project, or to override these credentials by entering new credentials. For more information, read Specify cloud data warehouse credentials.

July 2πŸ”—

Designer: New features πŸŽ‰

Components:

  • Databricks Jobs Compute lets you run and/or schedule transformation pipelines using a Databricks Jobs Compute cluster.

JuneπŸ”—

June 27πŸ”—

Maia: New features πŸŽ‰

Maia is now generally available in the Data Productivity Cloud.


June 26πŸ”—

Designer: Improvements πŸ”§

Components:


June 24πŸ”—

Designer: New features πŸŽ‰

Git:

  • You can now search your commit history by commit message, timestamp, author, and hash to stay informed about all changes made in your branch.

API: New Endpoints πŸŽ‰

Method Endpoint Description
POST /v1/agents Creates a new agent using the specified configuration parameters.

June 23πŸ”—

Designer: New features πŸŽ‰

  • Added a new auto-complete feature to some component properties, which makes it easier to include variables and warehouse functions in various fields, such as Calculator expressions or Send Email messages.

June 19πŸ”—

Maia: New features πŸŽ‰

Maia is now live in the Data Productivity Cloud.

Maia in Designer:

  • Maia is your always-on agentic data team, enabled via natural language prompting. You can collaborate with Maia directly in Designer to deliver data faster and to automate repetitive tasks.
  • Maia supports both transformation and orchestration pipelines. Support for orchestration pipelines is currently in public preview.
  • Maia can build, run, and update data pipelines for you, all via your natural language prompts.
  • Maia can advise you about designing optimal data pipelines in Designer. Maia can recommend next steps in your pipeline flow and even clarify any queries you have about your pipelines and the components therein.
  • Maia can explore the cloud data warehouse that you have connected to the environment in your Data Productivity Cloud project, examining data structures and relationships to help you create the pipelines that lead to the most value.
  • Maia will suggest actions, including version control operations such as committing your branch's current state and pushing those commits to your remote repository.
  • Maia uses tools to perform tasks in your Designer workspace based on your prompts. These tools let Maia go beyond chat and perform real, interactive actions.
  • Maia works across multiple pipelines.

Connectors:

  • Maia can create custom connectors for you. Maia works with any API that has publicly viewable documentation pages (ideally static HTML) that list and define available endpoints, methods, and parameters.

Observability dashboard:

  • Maia can perform root cause analysis on a failed pipeline and will suggest steps you can take to get your pipeline working again.
  • Maia can detect anomalies in your scheduled pipeline runs, such as sudden spikes in execution time. Maia does this by analyzing past scheduled pipelines, and will provide visual indicators for significant behavioral deviations.

Designer: New features πŸŽ‰

New connector:

  • Added a new Salesforce Load connector, which offers Incremental Load options.

Designer: Improvements πŸ”§

Components:

  • The JDBC Table Metadata to Grid component now supports an additional authentication method when the database type is set to Snowflake. Users can now choose Key Pair Authentication.

Note

Key pair authentication is the recommended authentication method, as Snowflake plans to block single-factor password authentication by November 2025.


June 18πŸ”—

Data Productivity Cloud: New features πŸŽ‰

New connector:

  • Added a new Oracle Unload from Snowflake connector, which enables you to unload data from your Snowflake data warehouse to an Oracle database.

June 16πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.111.64 to 2.111.76.
  • Library updates and security fixes.
  • Oracle pipelines will now always include BLOB and CLOB data in change events emitted during the snapshot period, regardless of the lob.enabled advanced setting.
  • Fixed an issue where an Oracle connection could pass validation but fail to retrieve source information.
  • Fixed an issue for SQL Server pipelines where the agent would incorrectly emit a snapshot error event during a snapshot.

June 11πŸ”—

Designer: New features πŸŽ‰

Components:


June 10πŸ”—

Data Productivity Cloud: New features πŸŽ‰

Components:

Data Productivity Cloud: Improvements πŸ”§

  • Added new and improved filters to the pipeline runs view of the pipeline observability dashboard. Users can now filter by Status, Project, and Environment.

MayπŸ”—

May 29πŸ”—

Designer: New features πŸŽ‰

  • Monitor the health of your data pipelines with real-time Pipeline notifications. These notifications keep users informed about pipeline failures in projects and environments they can access. Alerts are sent via email for scheduled and API-triggered runs, but not for manual executions.
  • The new View commit history Git action allows you to see the previous commits you have made in your branch. For more details, read View commit history.

Agents: New features πŸŽ‰

The Data Productivity Cloud is now available to run in your Snowflake account using Snowpark Container Services.


May 28πŸ”—

Data Productivity Cloud: New features πŸŽ‰

Matillion has released a new navigation and Designer user experience, including the following improvements:

  • New left navigation enables you to move between key areas of the Data Productivity Cloud from anywhere in the platform.
  • Profile & Account icon in the left navigation provides quick access to user and account settings.
  • Expanded Designer canvas reduces visual clutter, providing more space for pipeline creation.
  • Resizable Files and Schemas panels can be docked or positioned anywhere on the canvas.
  • Shorter load times in Designer ensure faster navigation and reduced latency.

May 23πŸ”—

Data Productivity Cloud: New features πŸŽ‰

Schedules:

  • When viewing schedules in the Your projects list, you can now filter the displayed schedules by their environment.

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.111.47 to 2.111.64.
  • Reverted Snowflake JDBC driver version to 3.21.0.
  • Library updates and security fixes.

May 22πŸ”—

Designer: New features πŸŽ‰

Secret definitions:

  • You can now create secret definitions in Designer when configuring components, without needing to leave the canvas. For information about creating secret definitions, read Secrets and secret definitions.

Environments:

  • You can now delete environments, provided that there are no active schedules running pipelines in the environment, and it is not the default environment for any branches. For more information, read Delete an environment.

Components:

  • You can now set an optional Message field as part of the post-processor for any component. This means you can add a custom, descriptive message that clearly communicates what's happening at each step of your pipeline, tailored to your specific requirements.

May 19πŸ”—

Designer: New features πŸŽ‰

Secret definitions:

  • If you use a Full SaaS deployment model, you can now update the value of secret definitions, for example if a password changes. For more information, read Update a secret definition.

May 16πŸ”—

Streaming: New features πŸŽ‰


May 15πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.111.40 to 2.111.47.
  • Improved the pipeline activity reporting to ensure consistent consumption reporting, especially during times when the source database has limited changes going through.
  • Library updates and security fixes.

Designer: Improvements πŸ”§

  • The Unstructured Flex connector has been renamed to Unstructured.io. Two new POST endpoints have been added: Cancel Job and Run Workflow.

May 13πŸ”—

Designer: New features πŸŽ‰

AI components:

  • Added a Cortex Finetune component, which enables users to fine-tune large language models (LLMs) using Snowflake Cortex.

Components:

Designer: Improvements πŸ”§

  • A new data type called Variant has been added for Databricks in the Create Table component. This data type is supported in Databricks Runtime version 15.4 and above.

May 8πŸ”—

Designer: New features πŸŽ‰

New connector:

  • Added a new Google Sheets connector, which lets you query the Google Sheets API to retrieve data and load it into a table. You can then use transformation components to enrich and manage the data in permanent tables.
    • The Google Sheets Query connector is no longer available for creating data pipelines, as it has been replaced by the new Google Sheets connector.

May 6πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.111.26 to 2.111.40.
  • Db2 for IBM i: Improved management of JDBC connections.
  • Library updates and security fixes.

May 2πŸ”—

Agents: New features πŸŽ‰

  • Agents can now be put into a Paused state, which gracefully puts the agent into a safe state when you need to perform maintenance work such as upgrading the version or updating the configuration to add new drivers.
  • We have introduced agent version tracks, with different release cadences. You can choose a version track based on how the agent update cadence fits with your operational practices, and how eager you are to take advantage of new features.
  • Agents can be updated either automatically, as soon as a new release is available, or manually, at a time of your choosing. Read our support policy for details.
  • A new agents public API lets you list agent details and also send commands such as restart to the agent.

AprilπŸ”—

April 28πŸ”—

Designer: New features πŸŽ‰

New connector:

  • Added a new Flex connector for developing data pipelines with data from Unstructured.io.

April 24πŸ”—

Designer: New features πŸŽ‰

New connector:

  • Added a new connector for SAP NetWeaver that lets you connect to SAP to access available data sources. You can then use transformation components to enrich and manage the data in permanent tables.

Git actions

  • Added a new View pull requests action that allows you to view the pull requests in your external Git repository if your team uses a pull request approval workflow. For more information, read View pull requests.
  • After pushing changes to the remote branch, you can click the link in the success notification to create a pull request in your external Git repository.

Components:

  • A new data type called Variant has been added for Databricks in the Convert Type transformation component. This data type is supported in Databricks Runtime version 15.4 and above.

Designer: Improvements πŸ”§

Components:

  • The S3 Attachment parameter for the Send Email component has now been updated from a string to a file editor.

April 17πŸ”—

AI components:

  • The Google Vertex AI Prompt component is now available. This component uses a Google Gemini large language model (LLM) to provide responses to user-composed prompts. The component takes one or more inputs in the form of text, image, audio, or video, combines the inputs with user prompts, and sends this data to the LLM for processing.

April 16πŸ”—

API: New Endpoint πŸŽ‰

The following endpoint has been added to the Data Productivity Cloud REST API:

Secret references

Method Endpoint Description
DELETE /v1/projects/{projectId}/secret-references/{secretReferenceName} Delete a secret reference

April 11πŸ”—

Designer: New features πŸŽ‰ and Improvements πŸ”§

  • You can now view and resolve merge conflicts using the Merge from branch Git action. For more information, read Resolving merge conflicts.
  • The S3 Location parameter in Send Email has been updated to S3 Attachment.

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.111.18 to 2.111.26.
  • Db2 for IBM i: Added an option to disable the use of unique indexes.
  • Db2 for IBM i: Set the default connection date-time property to iso.
  • Added periodic diagnostic logging.
  • Library updates and security fixes.

April 4πŸ”—

Streaming: Improvements πŸ”§

  • Updated the Streaming agent version from 2.111.6 to 2.111.18.
  • Db2 for IBM i: Set the default batch size value to 512, and fixed an issue where an incorrect journal receiver library could be used.
  • Library updates and security fixes.

April 2πŸ”—

DesignerNew features πŸŽ‰

Components:

  • Copy Grants: In Snowflake, the COPY GRANTS clause transfers privileges from the original object such as a table, view, schema, or database, to a new object when replacing or cloning, ensuring consistent access control and eliminating manual reassignment. The COPY GRANT clause information has been added to the following documentation:

  • Transient table type: The Create Table component for Snowflake now supports the new table type, Transient. This table type holds data indefinitely but can't be restored.

  • Attach a file using the Send Email component. The attachment must be a single file located in an S3 bucket.

MarchπŸ”—

March 27πŸ”—

DesignerNew features πŸŽ‰

Schedules:

  • Added a new Allow concurrent schedule runs option when creating a schedule that lets you choose whether a scheduled pipeline run is skipped if the previous scheduled run is still in progress.

March 25πŸ”—

DesignerNew features πŸŽ‰

Connectors:

  • Added a Coalesce Flex connector for developing pipelines.

StreamingImprovements πŸ”§

  • Updated the Streaming agent version from 2.110.4 to 2.111.6. This is the minimum agent version required for the new snapshot feature.
  • New snapshot feature introduces recoverability and flexibility. This replaces initial and on-demand snapshot options.
  • New dashboard layout makes it easier to navigate to relevant information.
  • Upgrade of the IBMi DB2 JDBC driver in the Agent.
  • Library updates and security fixes.

March 21πŸ”—

DesignerNew features πŸŽ‰

Connectors:

  • Added a Zoom Flex connector for developing pipelines.

March 20πŸ”—

DesignerNew features πŸŽ‰

AI:

  • Added an ML Classification component, which uses the Snowflake Classification machine learning (ML) function to sort data into different classes using patterns detected in training data. This component is available in public preview.

March 14πŸ”—

DesignerNew features πŸŽ‰

  • New and updated roles and permissions for projects and environments have been introduced: Owner, Contributor, Viewer, and None, replacing Admin, User, Read Only, and No Access, respectively. Additionally, a new environment role, Runner, has been introduced. All roles and permissions are documented in the following.

New connector:

  • X Ads, which lets you query the X Ads (formerly Twitter Ads) API to retrieve data and load it into a table. You can then use transformation components to enrich and manage the data in permanent tables.

Maia (formerly Copilot):


March 13πŸ”—

DesignerNew features πŸŽ‰

AI components:

  • Add a Cortex Multi Prompt component, which uses Snowflake Cortex to receive a prompt and then generate multiple responses (completions) using your chosen supported language model. This component is available in public preview.

CDCImprovements πŸ”§

  • Updates to the CDC agent version.
    • From 2.108.1 to 2.110.3.
    • From 2.110.3 to 2.110.4.
  • Reverted the Snowflake JDBC driver to 3.22.0 to mitigate an issue where the first Snowflake request on an agent could fail.
  • When using a storage target or Snowflake with the "Source Database & Schema" prefix, setting the database value is now Root where previously the Db2 system name would have been used.
  • Library updates and security fixes.

March 5πŸ”—

DesignerImprovements πŸ”§

  • System variables and post-processing are now generally available:

    • System Variables can now be used in component parameters, Python, Bash, and SQL scripts, in addition to post-processing.
    • Users can access shared and public scalar variables from a child pipeline using the post-processing Update Scalar property in Run Transformation, Run Orchestration, and Run Shared Pipeline components.
    • System variables are read-only and cannot be modified manually. Instead, system variable values are updated automatically during component executions.
    • System variables can be directly used in Python Pushdown, manually assigned in Bash Pushdown, and used inline in SQL script (orchestration) and SQL (transformation) components, with mapping required for external SQL.
    • New child pipeline system variable syntax is ${sysvar.childPipeline.vars.<varname>} and allows referencing variables from child pipelines, e.g., ${sysvar.childPipeline.vars.var1}.
  • Lineage now includes orchestration pipelines, providing a comprehensive view of data origins. This expansion introduces support for additional connectors and offers key benefits such as audit and compliance, impact analysis, and faster debugging.


FebruaryπŸ”—

February 28πŸ”—

DesignerImprovements πŸ”§

  • Added support for Snowflake "Named" stages to third-party connectors (Asana, GitHub, Mailchimp, etc.).
  • Added support for Snowflake storage integration stage access strategies when using AWS or Azure as the staging cloud platform with third-party connectors (Asana, GitHub, Mailchimp, etc.).

February 26πŸ”—

CDCImprovements πŸ”§

  • Updated the CDC agent version from 2.107.4 to 2.108.1.
  • Library updates and security fixes.
  • Fixed an issue where the agent could not compact legacy history.dat files.

February 25πŸ”—

Data Productivity CloudImprovements πŸ”§

  • Updated account admin privileges so that account admins can enable and disable the ability for each user to create projects within an account.

February 20πŸ”—

DesignerImprovements πŸ”§

  • Updated the SQL Script component for Snowflake and Databricks projects. You can now choose whether to specify which project and pipeline variables to declare as SQL variables upon component execution, or to declare all as SQL variables. By default, no project or pipeline variables are declared as SQL variables prior to an SQL Script component execution.
  • Updated the OpenAI Prompt, Azure OpenAI Prompt, and Amazon Bedrock Prompt components to add an Append setting to the Create Table Options property.

February 18πŸ”—

StreamingImprovements πŸ”§New features πŸŽ‰


February 17πŸ”—

CDCImprovements πŸ”§

  • Updated the CDC agent version from 2.102.27 to 2.107.4.
  • Updated the matillion.compact-history default value to true.
  • Library updates and security fixes.

February 13πŸ”—

AgentsNew features πŸŽ‰

  • Hydrid SaaS agents running on AWS can now be connected via AWS PrivateLink, an AWS service that allows you to connect via a secure, private connection. Using AWS PrivateLink, no traffic is exposed to the public Internet when it travels between the Data Productivity Cloud and your own AWS virtual private cloud.

February 11πŸ”—

APINew Endpoints πŸŽ‰

The following new endpoints have been added to the Data Productivity Cloud Flex connector:

  • List All Schedules
  • Create Schedule
  • Get Schedule
  • List Artifacts
  • Get Artifact
  • Promote Artifact
  • Pipeline Executions
  • List Custom Connectors
  • List Flex Connectors
  • List All Secret References
  • Create Secret Reference

February 7πŸ”—

DesignerImprovements πŸ”§


February 6πŸ”—

DesignerNew features πŸŽ‰Improvements πŸ”§

Agents:

New orchestration components:

Database transactions for Snowflake and Amazon Redshift allow multiple database changes to be executed as a single, logical unit of work. With the introduction of database transactions in the Data Productivity Cloud, the following orchestration components have been added:

  • The Begin component starts a new transaction in the database.
  • The Commit component completes a transaction, making all changes since the most recent Begin component visible to other users.
  • The Rollback component cancels a transaction, undoing all changes made since the most recent Begin component. These changes remain invisible to other users.

New connector:

  • Added the LinkedIn Ads Flex connector for developing data pipelines.

February 4πŸ”—

DesignerImprovements πŸ”§

Components:

  • Improved the dbt Core component by adding two new properties:
    • dbt Project Location: Use this property to clarify whether the dbt project is located in an external Git repository that is not connected to your Data Productivity Cloud project, or the dbt project is already hosted in the Git repository connected to your Data Productivity Cloud project.
    • dbt Project: This property will list any dbt projects that reside in the Git repository connected to your Data Productivity Cloud project. A directory is a dbt project when it includes a dbt_project.yml file.

February 3πŸ”—

APINew Endpoints πŸŽ‰

The following endpoints have been added to the Data Productivity Cloud REST API:

Schedules

| Method | Endpoint | Description |
| ------ | -------- | ----------- |
| GET | /v1/projects/{projectId}/schedules | List all schedules for a project |
| POST | /v1/projects/{projectId}/schedules | Create a new schedule |
| DELETE | /v1/projects/{projectId}/schedules/{scheduleId} | Delete the schedule with the given schedule ID |
| GET | /v1/projects/{projectId}/schedules/{scheduleId} | Get a schedule summary for a given schedule ID |
| PATCH | /v1/projects/{projectId}/schedules/{scheduleId} | Update the schedule with the given schedule ID and schedule request |

Artifacts

| Method | Endpoint | Description |
| ------ | -------- | ----------- |
| GET | /v1/projects/{projectId}/artifacts | Get a list of artifacts |
| PATCH | /v1/projects/{projectId}/artifacts | Enable or disable an artifact |
| POST | /v1/projects/{projectId}/artifacts | Create an artifact |
| GET | /v1/projects/{projectId}/artifacts/details | Get an artifact by a given version name |
| POST | /v1/projects/{projectId}/artifacts/promotions | Promote an artifact to a specific environment |

Connectors

| Method | Endpoint | Description |
| ------ | -------- | ----------- |
| GET | /v1/custom-connectors | Lists custom connector profiles for the requesting account |
| GET | /v1/flex-connectors | Lists Flex connector profiles |

Secret References

| Method | Endpoint | Description |
| ------ | -------- | ----------- |
| GET | /v1/projects/{projectId}/secret-references | List all secret references |
| POST | /v1/projects/{projectId}/secret-references/{secretReferenceName} | Create a secret reference |
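The path parameters in the tables above are filled in by simple substitution. A small illustrative helper (the base URL, project ID, and schedule ID are placeholder assumptions, not documented values):

```python
# Placeholder assumption -- substitute the real API host for your account.
BASE_URL = "https://api.matillion.com"

def endpoint(template: str, **params) -> str:
    """Fill in the {placeholders} of a documented endpoint template."""
    return BASE_URL + template.format(**params)

project_id = "123e4567-e89b-12d3-a456-426614174000"  # hypothetical
list_schedules = endpoint("/v1/projects/{projectId}/schedules",
                          projectId=project_id)
update_schedule = endpoint(
    "/v1/projects/{projectId}/schedules/{scheduleId}",
    projectId=project_id,
    scheduleId="nightly-load",  # hypothetical schedule ID
)
print(list_schedules)
print(update_schedule)
```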

JanuaryπŸ”—

January 29πŸ”—

DesignerImprovements πŸ”§

Orchestration:

Transformation:

CDCImprovements πŸ”§

  • Updated the CDC agent version from 2.102.1 to 2.102.27.
  • Library updates and security fixes.

January 27πŸ”—

DesignerImprovements πŸ”§

Components:

  • Updated Flex and custom connectors to include a Load Selected Data parameter. This setting lets you choose whether to return the entire payload (default) or only selected data objects in your API response.
  • Updated the Python Pushdown component. When you set a Python version, the Packages parameter is updated to show packages supported by that Python version. Currently the Python Pushdown component supports Python versions 3.9, 3.10 (default), 3.11, and 3.12.

January 23πŸ”—

DesignerImprovements πŸ”§

Designer UI:

  • Updated the command palette to include the following action commands, all of which can be activated by typing SHIFT + >:
    • Run Pipeline
    • Validate Pipeline
    • Canvas: Add component
    • Canvas: Zoom in
    • Canvas: Zoom out
    • Canvas: Zoom to fit

To access the command palette, type CMD + k or CTRL + k or click the magnifying glass button in the upper-right of the UI. Type CMD + SHIFT + k or CTRL + SHIFT + k to open the action commands list directly.

Components:

  • Updated the Database Query component to include a Fetch Size parameter, which lets you specify the batch size of rows to fetch at a time, for example, 500.

January 22πŸ”—

DesignerNew features πŸŽ‰

DataOps:

  • Artifacts can now be created to ensure that the version of a pipeline that you are releasing to production is the same version that you have tested in your environments. An artifact is an immutable collection of resources (such as pipelines and script files) that is deployed to your chosen environment when you publish. The following guides contain more information about artifacts and how they relate to schedules:
    • Artifacts explains how to create, view, and manage artifacts in the Data Productivity Cloud.
    • Schedules explains how artifacts fit into the process of publishing your changes and scheduling pipelines.
  • The process of pushing and publishing local changes using Git push has been changed. For more details about how to push and publish your local changes to create a schedule, read Git push.
  • For more information on how the Data Productivity Cloud can help you to take an innovative approach to DataOps, read DataOps in the Data Productivity Cloud.

Networks:

  • Customers using a Full SaaS Data Productivity Cloud solution can use the Database Query and RDS Query components to access data sources within their infrastructure using a configured SSH tunnel. This is accessed via the new Networks tab. Read our Networks guide to get started.

January 20πŸ”—

DesignerNew features πŸŽ‰

Designer:

  • System variables are now available in public preview. System variables provide component execution metadata, such as row count, execution duration, and more.
  • Post-processing is now available in public preview. Each orchestration pipeline component now includes a Post-processing tab, where you can update scalar variables and map your existing user-defined pipeline variables and project variables to system variables.

January 16πŸ”—

DesignerNew features πŸŽ‰

Migration tool:

  • The migration tool for any current Matillion ETL users who wish to migrate their data pipelines to the Data Productivity Cloud is now generally available. Migrating from Matillion ETL to the Data Productivity Cloud is a complex process, and we urge you to read all of the documentation listed here:
    • Migration considerations focuses on what a migration will involve and the steps required before getting started.
    • Migration feature parity details the current differences between Matillion ETL and the Data Productivity Cloud that you should consider.
    • Migration process explains how to export your jobs from Matillion ETL and import them to the Data Productivity Cloud.
    • Migration mappings explains how you can resolve issues when migrating shared jobs in Matillion ETL to the shared pipelines feature in the Data Productivity Cloud.

Data Productivity CloudNew features πŸŽ‰

  • Lineage filtering is now available for customers who want to enhance the clarity of data flows in their Data Productivity Cloud pipelines.

January 14πŸ”—

DesignerNew features πŸŽ‰

Orchestration:

Data Productivity CloudNew features πŸŽ‰

  • The Super Admin role is now available in the Data Productivity Cloud for new and existing accounts. This new role gives users with the role access to everything in that accountβ€”including all users, projects, environments, and more. For all new accounts, this is applied automatically. For more information, read Registration. For existing accounts, submit a support ticket with the account number and the user's email address to request the role assignment.

CDCImprovements πŸ”§

  • Updated the CDC agent version from 2.101.2 to 2.102.1.
  • Improved logging around the compaction of a pipeline's schema history.
  • Library updates and security fixes.

January 8πŸ”—

DesignerNew features πŸŽ‰

Code Editor:

  • Added a new feature, Code Editor, to Designer. Code Editor introduces an improved high code experience to the Data Productivity Cloud. Code Editor is powered by the Monaco Editor, which also powers Visual Studio Code and other editors across the web.
  • Added the ability to create .sql and .py files in Designer.
  • Added the ability to edit files in Code Editor.
  • Improved the SQL Script and Python Pushdown components by adding a Script Location property. With this property, you can decide whether to run an SQL or Python script from directly within the component (current behaviour) or choose to run an existing .sql or .py file in your project instead. Any .sql and .py files in the repository you have connected to your project can be edited using Code Editor and run within SQL Script or Python Pushdown components.

Note

Python Pushdown is currently only available for Snowflake projects.