
2022 changelog

Here you'll find the 2022 changelog for the Data Productivity Cloud. Just want to read about new features? Read our New Features blog.


December

December 7

Batch: Improvements πŸ”§

  • When setting up a batch pipeline, you can now read source-specific guidance in Data Loader's contextual help panel. Once you select a source, click Help on the right-hand side of the UI.

CDC: Improvements πŸ”§

  • You can now validate your pipeline setup at the end of the creation flow, and from the pipeline details page for existing pipelines.

November

November 8

Batch: New features πŸŽ‰


October

October 26

CDC: New features πŸŽ‰ / Improvements πŸ”§

  • Private preview has begun for IBM DB2 iSeries as a CDC data source. Please contact Matillion Support if you wish to participate.
  • Added a Read Replica UI option to the SQL Server CDC pipeline setup, letting users specify an "Always On" read-only replica database.
  • Added a new Snapshotting UI option that disables the full snapshot normally taken when a CDC pipeline starts.
  • Source connector stability improvements.

Note

Recent MS SQL Server connector updates mean encryption is now enabled by default. See the MS SQL Advanced Settings documentation for more information.


October 19

CDC: New features πŸŽ‰ / Improvements πŸ”§

  • Added a CDC pipeline details page to Data Loader.
  • You no longer need an Enterprise edition subscription to set up an agent. This can be done prior to starting your trial.

September

September 28

CDC: New features πŸŽ‰

  • Added support for multiple schemas, which enables CDC pipelines to read from numerous schemas and tables at once. You can choose any schemas and tables you want to include in the pipeline depending on the data source.
  • Streaming agent versioning has been introduced: a red agent version number indicates an out-of-date agent and prevents you from adding a pipeline to that agent. To add a new pipeline, you must upgrade your agent. Any currently running pipelines will keep working, but you are prompted to upgrade the agent.

September 22

CDC: Improvements πŸ”§

  • Fixed an issue where tables that started with a numeric characterβ€”or contained a dash characterβ€”would cause the Avro file generated to be non-compliant with the specification. This in turn would result in a file that couldn't be imported. Data Loader CDC now sanitizes table names with the following pattern:
    • Any non-alphanumeric character is replaced by an underscore.
    • If the leading character is numeric, prefix the table name with an underscore.

If you have encountered issues because of this error, you'll need to clear out your cloud storage location to remove the non-compliant files. If you haven't encountered this error, your agent upgrade doesn't require this remediation step.
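The sanitization pattern described above can be sketched in a few lines. This is a hypothetical illustration of the two rules, not Data Loader's actual implementation:

```python
import re

def sanitize_table_name(name: str) -> str:
    """Make a table name safe for Avro, per the two rules above."""
    # Rule 1: any non-alphanumeric character is replaced by an underscore.
    sanitized = re.sub(r"[^A-Za-z0-9]", "_", name)
    # Rule 2: if the leading character is numeric, prefix with an underscore.
    if sanitized and sanitized[0].isdigit():
        sanitized = "_" + sanitized
    return sanitized
```

For example, a table named `2022-sales` would become `_2022_sales`, which is a valid Avro name.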


August

August 31

Data Loader: New features πŸŽ‰

  • Added the sidebar filter. Users can now filter sources by destination and by load data type (CDC or Batch pipeline) on the Choose Source page.

CDC: New features πŸŽ‰


August 10

CDC: New features πŸŽ‰

  • Added MySQL as a new source for CDC pipelines in Data Loader. For a detailed description, read MySQL.

June

June 6

CDC: Improvements πŸ”§

  • Users can remove an existing agent from the Agent dashboard. For a detailed description of the Agent Dashboard page, see Agent Setup UI.
  • Users can edit and update agent details in the Agent dashboard.

May

May 5

Batch: New features πŸŽ‰

Data Productivity Cloud: New features πŸŽ‰

Data Loader adds several new capabilities:

  • Billing integration has been added for Batch and CDC data processing. Users will be charged through the Data Productivity Cloud based on their usage. For more information, visit our pricing page.

CDC: New features πŸŽ‰

  • CDC is available for Oracle, PostgreSQL, and MS SQL sources. CDC data is stored in Amazon S3 or Azure Blob Storage.
  • Matillion ETL Shared Jobs are available to ingest CDC data from Amazon S3 or Azure Blob Storage, transform the data, and load the data into Snowflake, Amazon Redshift, or Delta Lake on Databricks as final destinations. These configurable jobs can be scheduled to run in sync with your Streaming agent process once you have set up your CDC pipelines.
  • CDC pipelines can handle schema drift as change events in your source.

April

April 1

Introducing Data Loader 2.0

  • A new, streamlined user interface for wizard-style creation and management of both Batch and change data capture (CDC) pipelines.
  • Administration functions for saving cloud data warehouse destinations and storing passwords, cloud credentials, and OAuths for easier and faster pipeline creation.
  • The ability to launch Data Loader straight from the Data Productivity Cloud.

Batch: New features πŸŽ‰

  • Support for the following Batch pipeline data sources:
    • Amazon Redshift
    • Excel
    • Google BigQuery
    • Google Sheets
    • MariaDB
    • Microsoft SQL Server
    • MySQL
    • Oracle
    • PostgreSQL
    • Salesforce
    • Snowflake
  • Support for the following Batch pipeline destinations:
    • Snowflake
    • Amazon Redshift
    • Google BigQuery

CDC: New features πŸŽ‰

CDC Pipelines created within Data Loader require the use of Streaming agents (containers).

  • Streaming agent installation templates are provided to simplify installation and configuration in supported on-premises, AWS, and Azure cloud environments.
  • Secrets management applications can be used to host password credentials, further securing access to sources.
  • Support for the following CDC pipeline data sources:
    • Oracle
    • PostgreSQL
    • MS SQL
  • Support for the following CDC pipeline destinations:
    • Amazon S3
    • Azure Blob Containers