2025 changelog
Here you'll find the 2025 changelog for the Data Productivity Cloud. Just want to read about new features? Read our New Features blog.
December
December 18
Designer: New features
- A new Contributor role has been added for Environments. This is part of a set of changes to roles and permissions over the next few months. For more information, read Environment roles.
December 17
Designer: New features
- You can now create unlimited API credentials and assign each credential a specific role at the account level. This update enables safer integrations and a cleaner separation of duties for your automation. For more information, read Account roles for API credentials.
Components:
- Added a new ORC Load component for Snowflake that loads data directly into a Snowflake table from one or more ORC files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read ORC Load.
- Added a new AVRO Load component for Snowflake that loads data directly into a Snowflake table from one or more AVRO files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read AVRO Load.
- Added a new Parquet Load component for Snowflake that loads data directly into a Snowflake table from one or more Parquet files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read Parquet Load.
December 9
Designer: New features
- You can now quickly locate any file and folder in your project using the new search bar in the Files Panel.
Components:
- Added a new Show to Grid component for Snowflake, which allows you to save the results of a Snowflake SHOW command into a grid variable for further processing within your orchestration pipeline.
Billing: New features
- A new Same as billing checkbox is now available in the Shipping Address section of the credit card payment method, allowing you to automatically populate shipping details using your billing address.
December 8
Designer: New features
- You can now set default session parameters for Snowflake connections when you create an environment. Session parameters can be used to modify the behavior of your Snowflake connection.
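As an illustration, default session parameters are simple name/value pairs. The parameter names below (QUERY_TAG, TIMEZONE, WEEK_START) are real Snowflake session parameter names, but the values, and the idea of rendering them as ALTER SESSION statements, are illustrative assumptions rather than a description of how the Data Productivity Cloud applies them internally:

```python
# Illustrative defaults for a Snowflake environment connection. QUERY_TAG,
# TIMEZONE, and WEEK_START are real Snowflake session parameter names; the
# values are examples, not recommendations.
default_session_parameters = {
    "QUERY_TAG": "dpc-pipelines",  # tag queries for cost attribution
    "TIMEZONE": "UTC",             # run all sessions in UTC
    "WEEK_START": "1",             # weeks start on Monday
}

# Conceptually, each is equivalent to: ALTER SESSION SET <name> = '<value>';
for name, value in default_session_parameters.items():
    print(f"ALTER SESSION SET {name} = '{value}';")
```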
December 4
Streaming: Improvements
- Fixed a bug to ensure that a pipeline failure email is sent if a pipeline fails to restart when scheduled, provided the pipeline is opted in to receive error emails.
December 2
Designer: New features
Connectors:
- Added a new Snowflake Load connector, which enables you to fetch data from a Snowflake database. This new connector offers full and incremental load options, allowing you to only fetch new and updated rows of data.
November
November 27
Designer: New features
Matillion MCP server:
- The Matillion MCP server is now publicly available. The MCP server offers a secure, standardized way for Large Language Models (LLMs) to interact with your Data Productivity Cloud resources. By configuring the MCP server, you can enable AI assistants to help with tasks such as monitoring pipeline executions, analyzing credit consumption, and triggering pipeline runs.
Maia:
- You can now rename and delete previous conversations with Maia to make it easier to find useful previous chats, or clear conversations that you no longer need.
November 26
Designer: New features and endpoints
Agents:
- You can now restrict which agents are available to a specific project or environment, and ensure that only specific environments are allowed to use agents. For more information, read Restricting agents.
API:
- The following endpoints have been added to the Data Productivity Cloud REST API:
Agents
| Method | Endpoint | Description |
|---|---|---|
| DELETE | /v1/agents/{agentId}/allowlist | Remove a project from an agent's allow list. |
| GET | /v1/agents/{agentId}/allowlist | Get an agent's allow list. |
| PATCH | /v1/agents/{agentId}/allowlist | Add projects and/or environments to an agent's allow list. |
| PUT | /v1/agents/{agentId}/allowlist | Set an agent's allow list. |
November 21
Designer: New features
- You can now use slash commands when interacting with Maia. These commands simplify complex prompts into single-word commands that instruct Maia to document, explain, fix, optimize, and annotate files in your project.
November 14
Designer: New features
Maia:
- Maia now offers plan mode, which allows you to chat to Maia about your pipelines without making any changes to their configuration. In plan mode, Maia will suggest a multi-step plan to achieve your goal, but will not make any changes until you approve the plan.
- Maia can now move, copy and rename files in your project. For example, you can now tell Maia to "Make a copy of my 'customer_data' pipeline, rename it to 'customers', and move it into my 'customer_data' folder".
- Maia now supports grid variables. As a result, it can create grid variables with defined columns and default values, build pipelines that use grid variables, and help to configure components such as Grid Iterator and Append to Grid.
November 13
Agents: New features
- Added the ability to upload external database drivers to the Matillion agent for Snowflake.
Designer: New features
Connectors:
- Added a new JSON Load component for Snowflake that loads data directly into a Snowflake table from one or more JSON files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read JSON Load.
November 10
Streaming: Improvements
- Updated the Streaming agent version from `2.113.69` to `2.113.75`.
- Improved error messages shown in the pipeline event log if the Streaming pipeline stops with an error.
- Library updates and security fixes.
November 7
Designer: New features
Connectors:
- Added an authentication guide to connect to a Fabric Lakehouse SQL analytics endpoint using the Microsoft SQL Server Load component. For more information, read Microsoft Fabric Lakehouse authentication guide.
November 6
Designer: New features
Maia:
We have added two new Maia features. You can now ask Maia to:
- Visualize a sample of your data directly in the chat interface, so you can spot trends at a glance. To use this feature, ask Maia to create a chart visualizing the data in a component in your pipeline. For more details, read Visualizing data.
- Search files in your project and provide information about the content of your files. For example, you can ask Maia to tell you how you have configured specific components, or which pipelines contain data from a specific table. For more information, read File exploration.
- When creating a custom connector with Maia, you can now upload a PDF, JSON, or YAML file, including Postman collections and OpenAPI specifications.
Pipeline quality review:
- Added a new Review button to the Designer canvas. The Review feature lets you check whether your pipelines meet a defined set of rules, so you can be sure that your pipelines align with your organization's standards. For more information, read Reviewing pipeline quality.
October
October 30
Designer: New features
Connectors:
- Added the Open Exchange Rates Flex connector for developing pipelines.
October 28
Designer: New features
Connectors:
- Added a new CSV Load component for Snowflake that loads data directly into a Snowflake table from one or more CSV files stored in Snowflake Managed Storage, Amazon S3, Google Cloud Storage, or Azure Blob Storage. For more information, read CSV Load.
October 23
Designer: New features
Connectors:
- Added three new Oracle Fusion Cloud connectors for Snowflake projects. All three connectors offer full and incremental load options, allowing you to only fetch new and updated rows of data. For more information, read the documentation for each connector.
- Added support for Oracle Wallet in the JDBC Load connector, enabling secure mutual TLS connections to Oracle Autonomous Databases. For more information, read Oracle Autonomous Database authentication guide.
Agents: Improvements
- You can now include `.sso` (Oracle Wallet) files when uploading external drivers to the agent. For more information, read Uploading external drivers to the agent.
Streaming: Improvements
- Updated the Streaming agent version from `2.113.23` to `2.113.69`.
- Library updates and security fixes.
October 22
Designer: Improvements
- Maia now saves your recent conversations, making it easy to continue where you left off or revisit previous work. To resume a previous conversation with Maia, either click the conversation in the Branch conversations section of the Maia interface, or click Chat history in the top right to view a list of recent conversations. For more information, read Sessions and tools in Maia.
October 21
Designer: New features
Connectors:
- Added a new NetSuite SuiteAnalytics Load connector, which offers full and incremental load options. This component is currently only available for Snowflake projects.
October 16
Designer: Improvements
- Added the ability to include descriptions with pipelines, to improve collaboration and clarity when sharing pipelines. You can now add a description to your pipelines during creation, and at the point of sharing. Descriptions can be a maximum of 1000 characters.
- We have recently addressed an inconsistency where nested variables (i.e. a variable specified as the value of another variable) were not always being resolved. To ensure predictable and reliable pipeline execution, all nested variables will now be consistently resolved to their final value, up to one level of nesting. We understand that some pipelines may have been built to rely on the previous behavior. While we never intend to disrupt your workflows, this update was necessary to guarantee a consistent experience for all users and pipelines. For more information, read Using variables.
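The one-level resolution behavior can be illustrated with a small sketch. The `${name}` syntax and the resolver below are simplified assumptions for illustration, not the platform's actual implementation:

```python
import re

# Simplified sketch of one-level nested variable resolution. The ${name}
# syntax and this resolver are illustrative, not the platform's implementation.
def resolve(variables: dict, name: str) -> str:
    value = variables[name]
    # Resolve references of the form ${other_var}, up to one level of nesting;
    # unknown names are left untouched.
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        value,
    )

pipeline_vars = {"schema": "analytics", "target": "${schema}.orders"}
print(resolve(pipeline_vars, "target"))  # -> analytics.orders
```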
October 14
Designer: Improvements
- Added a new Passphrase parameter (optional) to the Database Query component. Use the drop-down menu to select the secret definition storing your passphrase. If your private key is passphrase protected, you will also need to add a secret to store the passphrase. For more information, read Using Snowflake key-pair authentication.
October 9
API: New endpoints
The following endpoints have been added to the Data Productivity Cloud REST API. You can use them to create a project within the Data Productivity Cloud. For more information, read Project Provisioning API and Generating a Personal Access Token.
Project provisioning
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/projects | Create a project |
| POST | /v1/projects/{projectId}/data-platform-connections | Create a default warehouse connection |
| POST | /v1/projects/{projectId}/environments | Create an environment |
| POST | /v1/projects/{projectId}/repositories | Initialize a repository |
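A minimal sketch of the call order these endpoints imply, assuming the project ID returned by the first call is substituted into the later paths (the helper and project ID below are hypothetical):

```python
# The endpoint paths come from the table above; the substitution helper and
# project ID are illustrative.
PROVISIONING_STEPS = [
    ("POST", "/v1/projects"),                                        # 1. create the project
    ("POST", "/v1/projects/{projectId}/data-platform-connections"),  # 2. default warehouse connection
    ("POST", "/v1/projects/{projectId}/environments"),               # 3. create an environment
    ("POST", "/v1/projects/{projectId}/repositories"),               # 4. initialize a repository
]

def resolve_path(path: str, project_id: str) -> str:
    return path.replace("{projectId}", project_id)

# The project ID returned by step 1 feeds into the remaining calls:
paths = [resolve_path(p, "my-project-id") for _, p in PROVISIONING_STEPS]
print(paths)
```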
October 7
Designer: New features
- Added a new IBM DB2 for i Load connector. This connector offers a new Incremental Load option, which allows you to only load new or updated records each time your pipeline runs.
October 6
Audit service API: New audit events
- The Audit service API now includes event types for accounts.
October 2
Designer: New features
Designer:
- When sampling data, you can now use an SQL query to filter the rows displayed in the table in the Sample data tab. You can filter for string, number, and date values, and use `AND` and `OR` operators to combine different filter criteria. For more information, read Filtering sampled data.
Connectors:
- Added a new Workday Load connector, which offers full and incremental load options.
September
September 29
Data Productivity Cloud: New features
Custom Connector:
- You can now add multiple OAuth providers to a single Custom Connector endpoint.
September 24
Designer: New features
Git actions:
Added a new Revert changes action to the branch menu. You can now revert your branch to a previous commit, which will undo all the changes that have been made to your branch after the selected commit. This new feature ensures that your commit history stays intact, and allows you to track all the changes made to your branch.
September 22
Designer: New features
Schedules:
You can now run schedules on demand using the Run now option in the Schedules tab of your project. For more details, read Run now.
September 18
Designer: New features
Connectors:
Added a new Jira Load connector, which offers full and incremental load options.
September 17
Designer: New features
Connectors:
A new endpoint for accessing OpenLineage events has been added to the Data Productivity Cloud Flex connector.
September 11
Designer: Improvements
Maia:
- Maia is now faster at completing a range of tasks, especially when adding multiple instances of the same component to the Designer canvas.
- Maia can now make multiple instances of the same type of change at the same time, for example adding or updating table prefixes for multiple Table Output components in a pipeline.
- Maia can now interact with scalar variables.
- Maia can now use iterators on components.
Designer UI:
- You can now add a component between two existing, connected components. Click on the connector line and then click + and choose the component you wish to add into the middle.
September 10
Designer: Improvements
Schedules:
- Increased the maximum number of schedules that can be created per account from 500 to 1000.
September 8
Streaming: Improvements
- Updated the Streaming agent version from `2.113.15` to `2.113.23`.
- Fixed an issue where a failed snapshot would cause the pipeline to lose any streaming events that were created during the snapshot window.
- Library updates and security fixes.
September 5
Designer: New features
Connectors:
- Added a Shopify Load connector, which offers full and incremental load options.
Designer: Improvements
- Added the dbt Core Version property to the dbt Core component. This property allows you to select which version of dbt Core to use.
September 1
Streaming: Improvements
- Updated the Streaming agent version from `2.113.6` to `2.113.15`.
- Library updates and security fixes.
August
August 28
Agents: New features
- You can now test the outbound connection from a Hybrid SaaS agent to a specified host and port.
August 27
Designer: New features
Two new connectors are available, both offering full and incremental load options.
API: New endpoints
The following endpoints have been added to the Data Productivity Cloud REST API:
Environments
| Method | Endpoint | Description |
|---|---|---|
| DELETE | /v1/projects/{projectId}/environments/{environmentName} | Delete an environment by name. |
Lineage
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/lineage/events | Retrieve OpenLineage-formatted lineage events within a time range, with pagination. |
Agent allow list
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/agents/{agentId}/allowlist | Get the current allow list for an agent. |
| PATCH | /v1/agents/{agentId}/allowlist | Add projects and/or environments to an agent's allow list. |
| PUT | /v1/agents/{agentId}/allowlist | Replace the allow list for an agent (overwrites existing). |
| DELETE | /v1/agents/{agentId}/allowlist | Remove a project and/or environment from an agent's allow list. |
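For illustration, a GET request to the lineage endpoint with a time range might be composed as below. The base URL and query-parameter names (`startTime`, `endTime`, `page`) are assumptions, not confirmed API parameters:

```python
from urllib.parse import urlencode

# Base URL and query-parameter names are illustrative assumptions; check the
# API reference for the real parameter names.
base = "https://us1.api.matillion.com/dpc/v1/lineage/events"
params = {
    "startTime": "2025-08-01T00:00:00Z",  # start of the time range
    "endTime": "2025-08-27T00:00:00Z",    # end of the time range
    "page": 0,                            # pagination
}
url = f"{base}?{urlencode(params)}"
print(url)
```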
August 26
Designer: New features
- The Files panel now displays indicators next to the name of a pipeline if it has uncommitted changes, to help you quickly identify changes that you have made but not yet committed. The indicators are N for new pipelines, M for modified pipelines, and R for renamed pipelines.
August 22
Designer: New features
- Added new endpoints to the Data Productivity Cloud connector. These include endpoints for agents, schedules, and audit events.
Streaming: Improvements
- Updated the Streaming agent version from `2.113.5` to `2.113.6`.
- Fixed an issue in the DB2 connector when processing columns with data type NCHAR or NVARCHAR.
August 21
Streaming: Improvements
- Updated the Streaming agent version from `2.112.12` to `2.113.5`.
- Library updates and security fixes.
August 20
Maia: Improvements
- Maia has been updated with the following new features:
- Automatic to-do lists: Maia now creates and displays a to-do list for the tasks you give it, helping you to track Maia's progress, and helping Maia to stay focused on the task at hand.
- File mentions: You can now direct Maia to specific files in your project by mentioning them in your prompt. To do this, use `@` followed by the file name, for example `@myfile.md`. Maia will then read the file and use it to inform its responses. You can also use `@` to direct Maia to files you want it to update, or to provide specific context for the task at hand.
- Textual file support: Maia can now read from and write to a range of textual files, including Markdown (`.md`), SQL (`.sql`), and Python (`.py`) files. This allows Maia to work directly with your code, create documentation, and edit other text-based files.
August 14
Data Productivity Cloud: Improvements
- You can now specify a Policy ID when using Databricks Jobs Compute to execute Databricks transformation pipelines. This allows you to enforce governed compute usage in Databricks projects.
August 8
Streaming: Improvements
- Updated the Streaming agent version from `2.112.9` to `2.112.12`.
- Library updates and security fixes.
August 6
Designer: New features
You can now use the following new connectors in Snowflake projects. Both new connectors offer Incremental Load options.
- JDBC Load
- MariaDB Load
  - We recommend using this connector instead of the Database Query component to connect to MariaDB.
Designer: Improvements
Sampling data:
- You can now export the data displayed in the Sample data tab in Designer as a CSV file. For more information, read Sampling output.
August 1
Designer: New features
Maia:
You can now enable and disable Maia features in your account settings. For more information, read Edit account settings.
July
July 31
API: New endpoints
- New endpoints for creating and managing Streaming pipeline definitions have been added to the Data Productivity Cloud REST API. A how-to guide covering common tasks with these endpoints is available in the documentation.
July 30
Streaming: Improvements
- Updated the Streaming agent version from `2.112.8` to `2.112.9`.
- Db2 for IBM i: Added support for processing journal entries where a date is an empty string.
Designer: Improvements
For Databricks projects, once a Jobs Compute configuration is created, the dialog named after your configuration now displays key details at a glance, rather than just the Compute ID. Additionally, if an environment is associated with a Jobs Compute, its configuration is now shown directly in the environment list when selected. For more information, read Databricks Jobs Compute.
July 29
Streaming: Improvements
- Updated the Streaming agent version from `2.111.76` to `2.112.8`.
- Library updates and security fixes.
- Fixed an issue where a Streaming agent out-of-memory (OOM) error would not be reported as a pipeline event correctly.
July 25
Designer: New features
AI components:
- A new Amazon OpenSearch Upsert component is now available. This component lets you upsert data converted into vector embeddings into an Amazon OpenSearch Service index using the Amazon OpenSearch Service API. Currently, this component only supports provisioned OpenSearch services, not serverless. Supported embedding providers include Amazon Bedrock and OpenAI.
API: New endpoint
- A new endpoint for accessing comprehensive audit logs over a period of time has been added to the Data Productivity Cloud REST API:
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/events | Retrieves audit events in a given time range. |
July 24
Designer: New features
Connectors:
Added the following connectors, which offer Incremental Load:
July 22
Designer: New features
Metadata sampling:
- You can now use the Metadata tab to view the name, data type, and size of each column in a component's output. For more information, read Metadata.
Connectors:
Added the following Flex connectors for developing data pipelines:
July 18
Designer: New features
Components:
- The X Ads Load component, which supersedes the X Ads component, is now available for Snowflake projects. This component lets you fetch data using the X Ads API.
July 11
Designer: New features
Components:
- The Intersect component is now available for Databricks projects. This component lets you compare two datasets, and then return any rows that are identical in both datasets.
July 8
Designer: New features
Git actions:
- When committing changes in Designer, you can now choose which files to include in each commit. For more information, read How to commit.
Data Productivity Cloud: New features
Amazon Redshift environments:
- When creating an Amazon Redshift environment, you can now choose whether to use the credentials associated with the agent you selected when creating your project, or to override these credentials by entering new credentials. For more information, read Specify cloud data warehouse credentials.
July 2
Designer: New features
Components:
- Databricks Jobs Compute lets you run and/or schedule transformation pipelines using a Databricks Jobs Compute cluster.
June
June 27
Maia: New features
Maia is now generally available in the Data Productivity Cloud.
- All Maia features in Designer, including support for both transformation and orchestration pipelines, multi-pipeline workflows, interactive tools, and natural language prompting, are now generally available (GA) in the Data Productivity Cloud.
June 26
Designer: Improvements
Components:
- Grid Variables are now supported in the Calculator component.
June 24
Designer: New features
Git:
- You can now search your commit history by commit message, timestamp, author, and hash to stay informed about all changes made in your branch.
API: New endpoints
- A new endpoint for agent creation has been added to the Data Productivity Cloud REST API:
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/agents | Creates a new agent using the specified configuration parameters. |
June 23
Designer: New features
- Added a new auto-complete feature to some component properties, which makes it easier to include variables and warehouse functions in various fields, such as Calculator expressions or Send Email messages.
June 19
Maia: New features
Maia is now live in the Data Productivity Cloud.
Maia in Designer:
- Maia is your always-on agentic data team, enabled via natural language prompting. You can collaborate with Maia directly in Designer to deliver data faster and to automate repetitive tasks.
- Maia supports both transformation and orchestration pipelines. Support for orchestration pipelines is currently in public preview.
- Maia can build, run, and update data pipelines for you, all via your natural language prompts.
- Maia can advise you about designing optimal data pipelines in Designer. Maia can recommend next steps in your pipeline flow and even clarify any queries you have about your pipelines and the components therein.
- Maia can explore the cloud data warehouse that you have connected to the environment in your Data Productivity Cloud project, examining data structures and relationships to help you create the pipelines that lead to the most value.
- Maia will suggest actions, including version control operations such as committing your branch's current state and pushing those commits to your remote repository.
- Maia uses tools to perform tasks in your Designer workspace based on your prompts. These tools let Maia go beyond chat and perform real, interactive actions.
- Maia works across multiple pipelines.
Connectors:
- Maia can create custom connectors for you. Maia works with any API that has publicly viewable documentation pages (ideally static HTML) that list and define available endpoints, methods, and parameters.
Observability dashboard:
- Maia can perform root cause analysis on a failed pipeline and will suggest steps you can take to get your pipeline working again.
- Maia can detect anomalies in your scheduled pipeline runs, such as sudden spikes in execution time. Maia does this by analyzing past scheduled pipelines, and will provide visual indicators for significant behavioral deviations.
Designer: New features
New connector:
- Added a new Salesforce Load connector, which offers Incremental Load options.
Designer: Improvements
Components:
- The JDBC Table Metadata to Grid component now supports an additional authentication method when the database type is set to Snowflake. Users can now choose Key Pair Authentication.
Note
Key pair authentication is the recommended authentication method, as Snowflake plans to block single-factor password authentication by November 2025.
June 18
Data Productivity Cloud: New features
New connector:
- Added a new Oracle Unload from Snowflake connector, which enables you to unload data from your Snowflake data warehouse to an Oracle database.
June 16
Streaming: Improvements
- Updated the Streaming agent version from `2.111.64` to `2.111.76`.
- Library updates and security fixes.
- Oracle pipelines will now always include BLOB and CLOB data in change events emitted during the snapshot period, regardless of the `lob.enabled` advanced setting.
- Fixed an issue where an Oracle connection could pass validation but fail to retrieve source information.
- Fixed an issue for SQL Server pipelines where the agent would incorrectly emit a snapshot error event during a snapshot.
June 11
Designer: New features
Components:
- Create External Table is now available for Snowflake.
June 10
Data Productivity Cloud: New features
- All automatic and export variables in Matillion ETL are now automatically migrated as system variables in the Data Productivity Cloud. For more information, read Migration: Export variables, and Migration: Automatic variables.
Components:
- Convert Type is now available for Amazon Redshift.
Data Productivity Cloud: Improvements
- Added new and improved filters to the pipeline runs list on the pipeline observability dashboard. You can now filter by Status, Project, and Environment.
May
May 29
Designer: New features
- Monitor the health of your data pipelines with real-time Pipeline notifications. These notifications keep users informed about pipeline failures in projects and environments they can access. Alerts are sent via email for scheduled and API-triggered runs, but not for manual executions.
- The new View commit history Git action allows you to see the previous commits you have made in your branch. For more details, read View commit history.
Agents: New features
The Data Productivity Cloud is now available for running in your Snowflake account using Snowpark Container Services.
May 28
Data Productivity Cloud: New features
Matillion has released a new navigation and Designer user experience, including the following improvements:
- New left navigation enables you to move between key areas of the Data Productivity Cloud from anywhere in the platform.
- Profile & Account icon in the left navigation provides quick access to user and account settings.
- Expanded Designer canvas reduces visual clutter, providing more space for pipeline creation.
- Resizable Files and Schemas panels can be docked or positioned anywhere on the canvas.
- Shorter load times in Designer ensure faster navigation and reduced latency.
May 23
Data Productivity Cloud: New features
Schedules:
- When viewing schedules in the Your projects list, you can now filter the displayed schedules by their environment.
Streaming: Improvements
- Updated the Streaming agent version from `2.111.47` to `2.111.64`.
- Reverted the Snowflake JDBC driver version to `3.21.0`.
- Library updates and security fixes.
May 22
Designer: New features
Secret definitions:
- You can now create secret definitions in Designer when configuring components, without needing to leave the canvas. For information about creating secret definitions, read Secrets and secret definitions.
Environments:
- You can now delete environments, provided that there are no active schedules running pipelines in the environment, and it is not the default environment for any branches. For more information, read Delete an environment.
Components:
- You can now set an optional Message field as part of the post-processor for any component. This means you can add a custom, descriptive message that clearly communicates what's happening at each step of your pipeline, tailored to your specific requirements.
May 19
Designer: New features
Secret definitions:
- If you use a Full SaaS deployment model, you can now update the value of secret definitions, for example if a password changes. For more information, read Update a secret definition.
May 16
Streaming: New features
- A Streaming agent for Google Cloud Platform (GCP) is now available. This agent can be installed in GCP and used to run Streaming pipelines in the Data Productivity Cloud.
May 15
Streaming: Improvements
- Updated the Streaming agent version from `2.111.40` to `2.111.47`.
- Improved pipeline activity reporting to ensure consistent consumption reporting, especially during periods when the source database has few changes.
- Library updates and security fixes.
Designer: Improvements
- The Unstructured Flex connector has been renamed to Unstructured.io. Two new POST endpoints have been added: Cancel Job and Run Workflow.
May 13
Designer: New features
AI components:
- Added a Cortex Finetune component, which enables users to fine-tune large language models (LLMs) using Snowflake Cortex.
Components:
- Extract Nested Data is now available for Databricks.
Designer: Improvements
- A new data type called Variant has been added for Databricks in the Create Table component. This data type is supported in Databricks Runtime version 15.4 and above.
May 8
Designer: New features
New connector:
- Added a new Google Sheets connector, which lets you query the Google Sheets API to retrieve data and load it into a table. You can then use transformation components to enrich and manage the data in permanent tables.
- The Google Sheets Query connector is no longer available for creating data pipelines, as it has been replaced by the new Google Sheets connector.
May 6
Streaming: Improvements
- Updated the Streaming agent version from `2.111.26` to `2.111.40`.
- Db2 for IBM i: Improved management of JDBC connections.
- Library updates and security fixes.
May 2
Agents: New features
- Agents can now be put into a Paused state, which gracefully puts the agent into a safe state when you need to perform maintenance work such as upgrading the version or updating the configuration to add new drivers.
- We have introduced agent version tracks, with different release cadences. You can choose a version track based on how the agent update cadence fits with your operational practices, and how eager you are to take advantage of new features.
- Agents can be updated either automatically, as soon as a new release is available, or manually, at a time of your choosing. Read our support policy for details.
- A new agents public API lets you list agent details and send commands such as `restart` to the agent.
April
April 28
Designer: New features
New connector:
- Added a new Flex connector for developing data pipelines with data from Unstructured.io.
April 24
Designer: New features
New connector:
- Added a new connector for SAP NetWeaver that lets you connect to SAP to access available data sources. You can then use transformation components to enrich and manage the data in permanent tables.
Git actions:
- Added a new View pull requests action that allows you to view the pull requests in your external Git repository if your team uses a pull request approval workflow. For more information, read View pull requests.
- After pushing changes to the remote branch, you can click the link in the success notification to create a pull request in your external Git repository.
Components:
- A new data type called Variant has been added for Databricks in the Convert Type transformation component. This data type is supported in Databricks Runtime version 15.4 and above.
DesignerImprovements π§
Components:
- The `S3 Attachment` parameter for the Send Email component has been updated from a string to a file editor.
April 17π
AI components:
- The Google Vertex AI Prompt component is now available. This component uses a Google Gemini large language model (LLM) to provide responses to user-composed prompts. The component takes one or more inputs in the form of text, image, audio, or video, combines the inputs with user prompts, and sends this data to the LLM for processing.
April 16π
APINew Endpoint π
The following endpoint has been added to the Data Productivity Cloud REST API:
Secret references
| Method | Endpoint | Description |
|---|---|---|
| DELETE | /v1/projects/{projectId}/secret-references/{secretReferenceName} | Delete a secret reference |
April 11π
DesignerNew features πImprovements π§
- You can now view and resolve merge conflicts using the Merge from branch Git action. For more information, read Resolving merge conflicts.
- The `S3 Location` parameter in Send Email has been updated to `S3 Attachment`.
StreamingImprovements π§
- Updated the Streaming agent version from `2.111.18` to `2.111.26`.
- Db2 for IBM i: Added an option to disable the use of unique indexes.
- Db2 for IBM i: The connection `date time` property now defaults to `iso`.
- Added periodic diagnostic logging.
- Library updates and security fixes.
April 4π
StreamingImprovements π§
- Updated the Streaming agent version from `2.111.6` to `2.111.18`.
- Db2 for IBM i: Set the default batch size value to 512, and fixed an issue where an incorrect journal receiver library could be used.
- Library updates and security fixes.
April 2π
DesignerNew features π
Components:
- Copy Grants: In Snowflake, the COPY GRANTS clause transfers privileges from the original object, such as a table, view, schema, or database, to a new object when it is replaced or cloned, ensuring consistent access control and eliminating manual reassignment. The COPY GRANTS clause information has been added to the relevant component documentation.
- Transient table type: The Create Table component for Snowflake now supports the new table type, Transient. This table type holds data indefinitely but can't be restored.
- Attach a file using the Send Email component. The attachment must be a single file located in an S3 bucket.
Marchπ
March 27π
DesignerNew features π
Schedules:
- Added a new Allow concurrent schedule runs option when creating a schedule that lets you choose whether a scheduled pipeline run is skipped if the previous scheduled run is still in progress.
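The semantics of this option can be sketched as a non-blocking lock: when concurrent runs are disallowed and the previous scheduled run still holds the lock, the new run is skipped. This is a minimal illustrative model, not the platform's actual scheduler implementation.

```python
import threading

class Schedule:
    """Illustrative model of the 'Allow concurrent schedule runs' option."""

    def __init__(self, allow_concurrent):
        self.allow_concurrent = allow_concurrent
        self._running = threading.Lock()

    def trigger(self, pipeline):
        if self.allow_concurrent:
            pipeline()          # always start a new run
            return "started"
        if self._running.acquire(blocking=False):
            try:
                pipeline()      # previous run finished; start this one
            finally:
                self._running.release()
            return "started"
        return "skipped"        # previous scheduled run still in progress
```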
March 25π
DesignerNew features π
Connectors:
- Added a Coalesce Flex connector for developing pipelines.
StreamingImprovements π§
- Updated the Streaming agent version from `2.110.4` to `2.111.6`. This is the minimum agent version required for the new snapshot feature.
- The new snapshot feature introduces recoverability and flexibility, replacing the initial and on-demand snapshot options.
- New dashboard layout makes it easier to navigate to relevant information.
- Upgraded the Db2 for IBM i JDBC driver in the agent.
- Library updates and security fixes.
March 21π
DesignerNew features π
Connectors:
- Added a Zoom Flex connector for developing pipelines.
March 20π
DesignerNew features π
AI:
- Added an ML Classification component, which uses the Snowflake Classification machine learning (ML) function to sort data into different classes using patterns detected in training data. This component is available in public preview.
March 14π
DesignerNew features π
- New and updated roles and permissions for projects and environments have been introduced: Owner, Contributor, Viewer, and None, replacing Admin, User, Read Only, and No Access, respectively. Additionally, a new environment role, Runner, has been introduced. All of these roles and permissions are documented.
New connector:
- X Ads, which lets you query the X Ads (formerly Twitter Ads) API to retrieve data and load it into a table. You can then use transformation components to enrich and manage the data in permanent tables.
Maia (formerly Copilot):
- Maia can now write your Git Commit messages for you using the Conventional Commits format.
March 13π
DesignerNew features π
AI components:
- Added a Cortex Multi Prompt component, which uses Snowflake Cortex to receive a prompt and then generate multiple responses (completions) using your chosen supported language model. This component is available in public preview.
CDCImprovements π§
- Updates to the CDC agent version:
  - From `2.108.1` to `2.110.3`.
  - From `2.110.3` to `2.110.4`.
- Reverted the Snowflake JDBC driver to `3.22.0` to mitigate an issue where the first Snowflake request on an agent could fail.
- When using a storage target or Snowflake with the "Source Database & Schema" prefix, the database value is now set to `Root`, where previously the Db2 system name would have been used.
- Library updates and security fixes.
March 5π
DesignerImprovements π§
- System variables and post-processing are now generally available:
- System Variables can now be used in component parameters, Python, Bash, and SQL scripts, in addition to post-processing.
- Users can access shared and public scalar variables from a child pipeline using the post-processing Update Scalar property in Run Transformation, Run Orchestration, and Run Shared Pipeline components.
- System variables are read-only and cannot be modified manually. Instead, system variable values are updated automatically during component executions.
- System variables can be directly used in Python Pushdown, manually assigned in Bash Pushdown, and used inline in SQL script (orchestration) and SQL (transformation) components, with mapping required for external SQL.
- The new child pipeline system variable syntax is `${sysvar.childPipeline.vars.<varname>}`, which allows referencing variables from child pipelines, e.g., `${sysvar.childPipeline.vars.var1}`.
- Lineage now includes orchestration pipelines, providing a comprehensive view of data origins. This expansion introduces support for additional connectors and offers key benefits such as audit and compliance, impact analysis, and faster debugging.
Februaryπ
February 28π
DesignerImprovements π§
- Added support for Snowflake "Named" stages to third-party connectors (Asana, GitHub, Mailchimp, etc.).
- Added support for Snowflake storage integration stage access strategies when using AWS or Azure as the staging cloud platform with third-party connectors (Asana, GitHub, Mailchimp, etc.).
February 26π
CDCImprovements π§
- Updated the CDC agent version from `2.107.4` to `2.108.1`.
- Library updates and security fixes.
- Fixed an issue where the agent could not compact legacy history.dat files.
February 25π
Data Productivity CloudImprovements π§
- Updated account admin privileges so that account admins can enable and disable the ability for each user to create projects within an account.
February 20π
DesignerImprovements π§
- Updated the SQL Script component for Snowflake and Databricks projects. You can now choose whether to specify which project and pipeline variables to declare as SQL variables upon component execution, or to declare all as SQL variables. By default, no project or pipeline variables are declared as SQL variables prior to an SQL Script component execution.
- Updated the OpenAI Prompt, Azure OpenAI Prompt, and Amazon Bedrock Prompt components to add an Append setting to the Create Table Options property.
February 18π
StreamingImprovements π§New features π
- Updates to Pre-built pipelines. Amazon Redshift now supports these types of pipelines.
February 17π
CDCImprovements π§
- Updated the CDC agent version from `2.102.27` to `2.107.4`.
- Updated the `matillion.compact-history` default value to `true`.
- Library updates and security fixes.
February 13π
AgentsNew features π
- Hybrid SaaS agents running on AWS can now be connected via AWS PrivateLink, an AWS service that provides a secure, private connection. With AWS PrivateLink, no traffic is exposed to the public internet as it travels between the Data Productivity Cloud and your own AWS virtual private cloud.
February 11π
APINew Endpoints π
The following new endpoints have been added to the Data Productivity Cloud Flex connector:
- List All Schedules
- Create Schedule
- Get Schedule
- List Artifacts
- Get Artifact
- Promote Artifact
- Pipeline Executions
- List Custom Connectors
- List Flex Connectors
- List All Secret References
- Create Secret Reference
February 7π
DesignerImprovements π§
- Workday and Workday Custom Reports now support OAuth 2.0 Authorization Code Grant authentication. For more information about creating an OAuth connection for these components, read the Workday authentication guide.
February 6π
DesignerNew features πImprovements π§
Agents:
- Added the ability to restart a Hybrid SaaS agent from within the Data Productivity Cloud.
New orchestration components:
Database transactions for Snowflake and Amazon Redshift allow multiple database changes to be executed as a single, logical unit of work. With the introduction of database transactions in the Data Productivity Cloud, the following orchestration components have been added:
- The Begin component starts a new transaction in the database.
- The Commit component completes a transaction, making all changes since the most recent Begin component visible to other users.
- The Rollback component cancels a transaction, undoing all changes made since the most recent Begin component. These changes remain invisible to other users.
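The Begin / Commit / Rollback semantics these components wrap can be sketched with plain SQL transactions. The example below uses Python's built-in `sqlite3` purely as a stand-in database for illustration; the components themselves run against Snowflake or Amazon Redshift.

```python
import sqlite3

# sqlite3 stands in for Snowflake/Redshift to show transaction semantics.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")

conn.execute("BEGIN")                 # Begin component: start a transaction
conn.execute("UPDATE accounts SET balance = balance - 40 WHERE name = 'a'")
conn.execute("UPDATE accounts SET balance = balance + 40 WHERE name = 'b'")
conn.execute("COMMIT")                # Commit component: both updates become visible

conn.execute("BEGIN")                 # start a second transaction
conn.execute("UPDATE accounts SET balance = 0")
conn.execute("ROLLBACK")              # Rollback component: the update is undone

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
# balances == {'a': 60, 'b': 40}: the committed transfer survives, the rollback does not
```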
New connector:
- Added the LinkedIn Ads Flex connector for developing data pipelines.
February 4π
DesignerImprovements π§
Components:
- Improved the dbt Core component by adding two new properties:
- dbt Project Location: Use this property to specify whether the dbt project is located in an external Git repository that is not connected to your Data Productivity Cloud project, or is already hosted in the Git repository connected to your Data Productivity Cloud project.
- dbt Project: This property lists any dbt projects that reside in the Git repository connected to your Data Productivity Cloud project. A directory is a dbt project when it includes a `dbt_project.yml` file.
February 3π
APINew Endpoints π
The following endpoints have been added to the Data Productivity Cloud REST API:
Schedules
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/projects/{projectId}/schedules | List all schedules for a project |
| POST | /v1/projects/{projectId}/schedules | Create a new schedule |
| DELETE | /v1/projects/{projectId}/schedules/{scheduleId} | Delete the schedule by the given schedule ID |
| GET | /v1/projects/{projectId}/schedules/{scheduleId} | Get a schedule summary for a given schedule ID |
| PATCH | /v1/projects/{projectId}/schedules/{scheduleId} | Update the schedule by the given schedule ID and schedule request |
Artifacts
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/projects/{projectId}/artifacts | Get a list of artifacts |
| PATCH | /v1/projects/{projectId}/artifacts | Enable or disable an artifact |
| POST | /v1/projects/{projectId}/artifacts | Create an artifact |
| GET | /v1/projects/{projectId}/artifacts/details | Get an artifact by a given version name |
| POST | /v1/projects/{projectId}/artifacts/promotions | Promote an artifact to a specific environment |
Connectors
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/custom-connectors | Lists custom connector profiles for the requesting account |
| GET | /v1/flex-connectors | Lists Flex connector profiles |
Secret References
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/projects/{projectId}/secret-references | List all secret references |
| POST | /v1/projects/{projectId}/secret-references/{secretReferenceName} | Create a secret reference |
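As a brief sketch of calling one of the endpoints above, the snippet below lists a project's schedules. The path comes from the Schedules table; the base URL and `Bearer` auth scheme are assumptions for illustration, so check the API reference for the real authentication details.

```python
import json
import urllib.request

def schedules_url(base_url, project_id):
    # Path taken from the Schedules endpoint table above.
    return f"{base_url}/v1/projects/{project_id}/schedules"

def list_schedules(base_url, token, project_id):
    """GET the schedules for a project and decode the JSON response."""
    req = urllib.request.Request(
        schedules_url(base_url, project_id),
        headers={"Authorization": f"Bearer {token}"},  # auth scheme assumed
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires a real base URL, token, and project ID):
# schedules = list_schedules(base_url, token, project_id)
```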
Januaryπ
January 29π
DesignerImprovements π§
Orchestration:
- The S3 Unload component is now available for Databricks projects.
- The Azure Blob Storage Unload component is now available for Databricks projects.
Transformation:
- The Generate Sequence component is now available for Databricks projects.
CDCImprovements π§
- Updated the CDC agent version from `2.102.1` to `2.102.27`.
- Library updates and security fixes.
January 27π
DesignerImprovements π§
Components:
- Updated Flex and custom connectors to include a `Load Selected Data` parameter. This setting lets you choose whether to return the entire payload (default) or only selected data objects in your API response.
- Updated the Python Pushdown component. When you set a Python version, the `Packages` parameter is updated to show packages supported by that Python version. Currently, the Python Pushdown component supports Python versions 3.9, 3.10 (default), 3.11, and 3.12.
January 23π
DesignerImprovements π§
Designer UI:
- Updated the command palette to include the following action commands, all of which can be activated by typing `SHIFT` + `>`:
- Run Pipeline
- Validate Pipeline
- Canvas: Add component
- Canvas: Zoom in
- Canvas: Zoom out
- Canvas: Zoom to fit
To access the command palette, type CMD + k or CTRL + k or click the magnifying glass button in the upper-right of the UI. Type CMD + SHIFT + k or CTRL + SHIFT + k to open the action commands list directly.
Components:
- Updated the Database Query component to include a `Fetch Size` parameter, which lets you specify the number of rows to fetch in each batch, for example, 500.
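Conceptually, a fetch size controls how many rows the driver pulls from the source per round trip, bounding memory use for large result sets. A rough sketch of the batching behaviour (illustrative only, not the component's implementation):

```python
def fetch_in_batches(rows, fetch_size=500):
    """Yield rows in batches of fetch_size, as a driver would fetch them."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == fetch_size:
            yield batch   # one "round trip" worth of rows
            batch = []
    if batch:
        yield batch       # final, possibly short, batch
```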
January 22π
DesignerNew features π
DataOps:
- Artifacts can now be created to ensure that the version of a pipeline that you are releasing to production is the same version that you have tested in your environments. An artifact is an immutable collection of resources (such as pipelines and script files) that is deployed to your chosen environment when you publish. The following guides contain more information about artifacts and how they relate to schedules:
- The process of pushing and publishing local changes using Git push has been changed. For more details about how to push and publish your local changes to create a schedule, read Git push.
- For more information on how the Data Productivity Cloud can help you to take an innovative approach to DataOps, read DataOps in the Data Productivity Cloud.
Networks:
- Customers using a Full SaaS Data Productivity Cloud solution can use the Database Query and RDS Query components to access data sources within their infrastructure using a configured SSH tunnel. This is accessed via the new Networks tab. Read our Networks guide to get started.
January 20π
DesignerNew features π
Designer:
- System variables are now available in public preview. System variables provide component execution metadata, such as row count, execution duration, and more.
- Post-processing is now available in public preview. Each orchestration pipeline component now includes a Post-processing tab, where you can update scalar variables and map your existing user-defined pipeline variables and project variables to system variables.
January 16π
DesignerNew features π
Migration tool:
- The migration tool for any current Matillion ETL users who wish to migrate their data pipelines to the Data Productivity Cloud is now generally available. Migrating from Matillion ETL to the Data Productivity Cloud is a complex process, and we urge you to read all of the documentation listed here:
- Migration considerations focuses on what a migration will involve and the steps required before getting started.
- Migration feature parity details the current differences between Matillion ETL and the Data Productivity Cloud that you should consider.
- Migration process explains how to export your jobs from Matillion ETL and import them to the Data Productivity Cloud.
- Migration mappings explains how you can resolve issues when migrating shared jobs in Matillion ETL to the shared pipelines feature in the Data Productivity Cloud.
Data Productivity CloudNew features π
- Lineage filtering is now available for customers who want to enhance the clarity of data flows in their Data Productivity Cloud pipelines.
January 14π
DesignerNew features π
Orchestration:
- The JDBC Table Metadata to Grid component is now available for Databricks projects.
Data Productivity CloudNew features π
- The Super Admin role is now available in the Data Productivity Cloud for new and existing accounts. This role gives assigned users access to everything in that account, including all users, projects, environments, and more. For all new accounts, it is applied automatically. For more information, read Registration. For existing accounts, submit a support ticket with the account number and the user's email address to request the role assignment.
CDCImprovements π§
- Updated the CDC agent version from `2.101.2` to `2.102.1`.
- Improved logging around the compaction of a pipeline's schema history.
- Library updates and security fixes.
January 8π
DesignerNew features π
Code Editor:
- Added a new feature, Code Editor, to Designer. Code Editor introduces an improved high-code experience to the Data Productivity Cloud. Code Editor is powered by the Monaco Editor, which also powers Visual Studio Code and other editors across the web.
- Added the ability to create .sql and .py files in Designer.
- Added the ability to edit files in Code Editor.
- Improved the SQL Script and Python Pushdown components by adding a Script Location property. With this property, you can decide whether to run an SQL or Python script from directly within the component (current behaviour) or choose to run an existing .sql or .py file in your project instead. Any .sql and .py files in the repository you have connected to your project can be edited using Code Editor and run within SQL Script or Python Pushdown components.
Note
Python Pushdown is currently only available for Snowflake projects.