What's new in the Data Productivity Cloud?

Want to see the raw changes? Check out the Changelog.

Amazon Redshift in the Data Productivity Cloud? Check!

Designer Β· New features πŸŽ‰

Calling all Amazon Redshift users! If Redshift is your cloud data warehouse of choice, it's time to jump on board. The Data Productivity Cloud now welcomes you to build and manage pipelines at scale.

"Knowing that users of Amazon Redshift can now take advantage of the Matillion Data Productivity Cloud has made my week!"

β€” Jamie Cole, Senior Product Manager at Matillion

Select Amazon Redshift as your data platform!

Logging for connectors in Custom Connectors and Designer

You can now set the logging level when working with custom connectorsβ€”both in the Custom Connector interface and in Designer. Choose from Error, Warn, Info, Trace, and Debug to set your log information level when testing, validating, or running your connectors.

In Custom Connectors, when editing your custom connector, send a request to your configured API endpoint, then click the Logs tab to see the log details associated with your request.

To see logs in Designer, first set the log level of the connector component on your canvas by going to Advanced Settings in the Properties panel. In the Log Level property, select a level, such as Trace. Then run the pipeline and check the logging info in the Task History's Message column.
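If you're wondering how the levels relate to each other, they behave like any conventional logging ladder: picking a level surfaces that severity and everything above it. Here's a quick illustration in plain Python (an analogy only: the visible_levels helper and the TRACE value are our invention, not part of the product):

```python
import logging

# Analogy only: this is standard Python logging, not Matillion's logging
# framework. The point is that a level setting acts as a threshold: pick
# Warn and you see Warn and Error; pick Trace and you see everything.
# Python has no built-in TRACE level, so we register one below DEBUG.
TRACE = 5
logging.addLevelName(TRACE, "TRACE")

def visible_levels(selected: str) -> list[str]:
    """Return which message levels a given log-level setting would surface."""
    ladder = {"Trace": TRACE, "Debug": logging.DEBUG, "Info": logging.INFO,
              "Warn": logging.WARNING, "Error": logging.ERROR}
    threshold = ladder[selected]
    return [name for name, value in ladder.items() if value >= threshold]

print(visible_levels("Warn"))   # ['Warn', 'Error']
print(visible_levels("Trace"))  # all five levels, most verbose
```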

Feature spotlight… A multitude of new connectors launched

Designer Β· New features πŸŽ‰

We've been quiet about connectors for a few weeks, but it's time for an update on all the ones our teams have developed recently. You've now got nine, count 'em NINE, new easy ways to load data!

In March, we added the Databricks connector, which uses the Databricks API to retrieve data.

Last month and this month, we added the following Flex connectors for developing data pipelines:

Remember, you can use Flex connectors in their preconfigured state, or you can edit them by adding or amending available endpoints as per your use case.

Nine new connectors!

Feature spotlight... Connect projects to your own GitHub repository

Designer Β· New features πŸŽ‰

With the Matillion GitHub app, you can connect your GitHub repositories to your Data Productivity Cloud projects. The app keeps access tightly scoped, granting only the repositories and actions the Data Productivity Cloud needs to store and retrieve the data pipelines in your GitHub account.

Keep in mind that your GitHub repo and user permissions may affect how users interact with the connected Data Productivity Cloud project!

Once you've installed the app in your GitHub account, you can create a new GitHub repository, connect an existing one, and view the list of repositories that the Data Productivity Cloud has access to. When you connect an existing repository, any active branches will be cloned into Designer and selectable from the Branches tab.

Matillion GitHub app

For all the details, see the documentation, including the prerequisites.

Here's a test you'll want to ace

Designer Β· New features πŸŽ‰

Fancy adding a shiny new trophy to your mantel? We have a new challenge for you! Become an accredited Matillion expert by demonstrating strong core competency in the Data Productivity Cloud. We're happy to announce the Data Productivity Cloud Foundations certification.

What's better than earning this badge of success and having your skills recognized? The satisfaction of consolidating your learning and proving to yourself that you know your stuff.

The exam consists of 40 multiple-choice questions that must be answered within one hour. The subject matter crosses all areas of the Data Productivity Cloud, from the Designer and components to agents and administration.

Before scheduling your exam, make sure to check out the practice exams and the study guide, so you can feel confident before jumping in.

Then when you're ready, go here to find out more and get started.

Data Productivity Cloud Foundations certification

Pipeline portability is no April Fool's joke… Export/import is really here!

Designer Β· New features πŸŽ‰

Here's a nice boost for your data productivity: you can now export and import your pipelines between projects. And this new feature is just as easy to use as you would expect it to beβ€”just click the ellipsis on a pipeline in your pipeline folder tree.

You can also export and import an entire pipeline folder, including all the pipelines and subfolders it contains, or export and import the entire project including all folders and pipelines within it.

It's so useful and yet so simple; let's get sharing!

"With our new import and export capability, data engineers can easily reuse and adapt code from existing projects, saving time and effort when building new pipelines in Matillion. In addition, the ability to share logic with colleagues promotes knowledge exchange, collaboration and, as organizations scale, offers the ability to build collections of templates and boilerplate projects, boosting productivity when getting started on new use cases."

β€” Sarah Waters, Principal Product Owner at Matillion

Export/import pipelines

Read the documentation for more details.

Harness the power of LLMs right inside Designer!

Designer Β· New features πŸŽ‰

Ready for AI-augmented data engineering? Have we got components for you!

  • Open AI Prompt
  • Azure OpenAI Prompt
  • Amazon Bedrock Prompt
  • Pinecone Vector Upsert
  • Pinecone Vector Query

Our new prompt components use large language models (LLMs) to provide responses to user-composed prompts. The components take inputs from your source table, combine those inputs with user prompts, and then send this data to the LLMs for processing. The components even let you augment your queries to the LLM using retrieval augmented generation (RAG).

Speaking of RAG, the Pinecone Vector Upsert component lets you convert data stored in your cloud data warehouse into embeddings, and then store the embeddings as vectors in your Pinecone vector database. The vectors are then available to use in retrieval augmented generation through one of our AI prompt components (or in other projects).

You can use the Pinecone Vector Query component to ingest text search strings from a table and return text data associated with similar vectors in your Pinecone vector database. This means your AI prompt components can benefit from multiple RAG sources at once!
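To make the upsert-then-query flow concrete, here's a deliberately tiny, self-contained sketch of the idea in plain Python. Nothing below is the Pinecone API or Matillion's implementation: the store, embed, upsert, and query names are illustrative stand-ins, and the letter-frequency "embedding" is a toy substitute for a real embedding model.

```python
import math

# Toy model of the upsert/query idea behind vector RAG. Not the Pinecone
# API and not Matillion's implementation: names here are invented, and
# the letter-frequency "embedding" stands in for a real embedding model.
store: dict[str, tuple[list[float], str]] = {}

def embed(text: str) -> list[float]:
    # Stand-in embedding: frequencies of the letters a-j.
    return [text.lower().count(c) / max(len(text), 1) for c in "abcdefghij"]

def upsert(doc_id: str, text: str) -> None:
    store[doc_id] = (embed(text), text)   # vector plus original text

def query(search: str, top_k: int = 1) -> list[str]:
    q = embed(search)
    def cosine(v: list[float]) -> float:
        dot = sum(a * b for a, b in zip(q, v))
        norms = math.sqrt(sum(a * a for a in q)) * math.sqrt(sum(b * b for b in v))
        return dot / norms if norms else 0.0
    ranked = sorted(store.values(), key=lambda rec: cosine(rec[0]), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

upsert("doc1", "invoice data for accounting")
upsert("doc2", "gadget specifications and hardware")
print(query("accounting invoices"))  # the invoice document ranks first
```

The real components do the same dance at warehouse scale: convert, store, then retrieve by similarity to ground your prompts.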

"Our new AI Prompt components enable data engineers to harness the power of prompt engineering when designing data pipelines in the Data Productivity Cloud. This helps to transform data in an unprecedented fashion, using natural language as the main input. It's as easy as defining transformations using the language of the business through prompts, in low-code, graphical components. Anyone can start integrating the reasoning capabilities of LLMs to transform and enrich data, and take data processing to the next level. For instance, turning unstructured data into structured content is now a breeze: you can now make the data business-ready faster with AI thanks to these new components. What was previously the realm of tech-savvy data scientists can now be done at scale by data engineers."

β€” Cyril Sonnefraud, Principal Product Manager at Matillion

You'll find all the new AI components in the Components tab of the Designer; just click the AI filter.

AI Prompt components in Designer

For specifics on each component, see the documentation, and make sure to check out the videos for hands-on configuration guidance:

AI Note is now in general availability

It's also the perfect time to mention that our AI Note feature is now in GA. When it's this easy, there's no excuse not to annotate your data pipelines with explanations helpful to collaborators. On the canvas, just select the components you want to document, right-click and select Add note using AI. Not quite happy with the generated note? Click Refine, then add more detail by clicking Elaborate, or reduce the detail by clicking Shorten.

Read the documentation and watch the video for more information.

Data lineage makes its grand entrance

Designer Β· New features πŸŽ‰

No more mysteries about what went wrong or right during your data transformations! Matillion's new data lineage functionality is currently in public preview in transformation pipelines, with orchestration lineage coming next down the line. You can now track an entire pipeline from source to target, allowing you to see the flows of data, and better understand relationships and transformations. Data lineage in Designer provides:

  • Transformation lineage at runtime.
  • Table-level lineage.
  • Column-level lineage.
  • Visibility into joins where tables are connected.
  • Table lists.
  • Table metadataβ€”column information and data types.
  • Metadata about a pipeline run (when it ran, how long it took, who ran the pipeline, and success/fail).

Whether you're interested in confirming data quality, ensuring compliance, improving troubleshooting, or just efficient data processing, data lineage in Designer gives you a massive new window into the flow of data through your pipelines.

"We're transforming our observability capabilities with integrated lineage, providing end-to-end visibility and traceability of data journeys to empower organisations with greater insights, accountability, and efficiency in their data integration processes."

β€” Lee Power, Senior Product Manager at Matillion

Data lineage in Designer

To get started, from the main menu click "Manage", then "Pipeline Runs", and finally "Lineage" to see data lineage for your recent transformation runs. You may need to run your transformations again to see them in lineage.

See the documentation for all the details.

Text mode now in Designer for components and grid variables

Dear customers, we love you and we love to hear what you want most. That's why we're so excited to announce that you can now enter column information and grid variable fields in text mode. Adding information to a property dialog is now much faster: instead of typing or selecting values in the individual fields of the dialog, you can click the Text mode toggle to open a multi-line editor that lets you add all the items in a single block.

What about error handling? Well, after you've entered details in text mode, you switch out of text mode to verify the details are valid before you continue. Don't worry: text mode will tell you if you mistyped!

You can use this feature to edit the fields of an existing grid variable or Columns dialog, regardless of whether the columns were originally completed in text mode or not. Perhaps most satisfying: text mode allows you to paste in values from other text sources. You can also rapidly copy the fields from a completed grid variable or dialog, enabling easy duplication of properties between or within components and grid variables.

For details, see Components overview and Creating grid variables.

And we can't forget to mention our new Flex connectors!

More exciting news: we've just added these Flex connectors for developing data pipelines:

Have you seen the new UI?

Designer Β· New features πŸŽ‰

The Designer's new UI launched this week, with changes that enable easier configuration of components in a new, vertical panel. The redesign also features a more economical use of real estate, with access to panels on an as-needed basis. The component configuration panel appears when a component is selected or added to the canvas. The Pipelines, Components, Schemas, and Variables panels are out of the way until you click to expand them outward.

"The previous layout presented a significant user experience challenge due to limited vertical space for the component configuration panel. This often resulted in hidden parameters and wasted time spent resizing the panel. The new design addresses this by providing ample vertical space, complete with an intuitive scroll to accommodate complex configurations. This, in turn, allows for a dedicated and easily accessible Sample data tab at the bottom."

β€” Amie Wilson, Senior Product Designer at Matillion

New UI

Plus… the Query Result to Grid component is here!

Now in public preview, the Query Result to Grid component allows you to query a table and return rows of data that are loaded into a predefined grid variable for use elsewhere in the pipeline. You can set up a simple query via the component interface, or write your own SQL query.

This component is the grid equivalent of the Query Result to Scalar component.

The Query Result To Grid component is enabled initially for Snowflake and Redshift warehouses.
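Conceptually, the result looks like this. Here's a sketch using SQLite standing in for Snowflake or Redshift; the table, query, and grid_variable name are invented for illustration, since in Designer you configure all of this on the component:

```python
import sqlite3

# Sketch of the Query Result to Grid idea, with SQLite standing in for
# Snowflake or Redshift. Table and query are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, total INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EMEA", 120), ("APAC", 80), ("AMER", 200)])

# "Write your own SQL" mode: any query whose columns match the grid's.
grid_variable = conn.execute(
    "SELECT region, total FROM orders WHERE total > 100").fetchall()

print(grid_variable)  # rows land in a two-dimensional grid variable
```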

And more connectors!

We're happy to also announce the following new Flex connectors for developing data pipelines:

You can edit these Flex connectors through the Custom Connector feature to add your own endpoints and more.

We've also added Azure Blob Storage as a source and target for the Data Transfer component.

Grid variables are here!

Designer Β· New features πŸŽ‰

Use Grid Variables? Yes please! With the exciting addition of grid variables, the Data Productivity Cloud now has a new type of pipeline variable. A grid variable is a two-dimensional array that holds multiple values in named columns. With grid variables, you can pass lists of data as dynamic values into a component's properties.

In query components, you can select Use Grid Variable when selecting the columns you want from your data source. You'll also want to use grid variables for passing arrays of data for use in Python scripts.

Default values can be set or changed dynamically at runtime, so grid variables enable you to populate table metadata and pass values between pipelines, from a parent pipeline to the child pipeline it runs.

With this new support for grid variables in pipelines, use our new Grid Iterator component to loop over the rows of your grid variables.
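If it helps to picture the shape of a grid variable and what Grid Iterator does with it, here's a plain-Python sketch (our own illustration, not Designer's internals; the column names and rows are made up):

```python
# A grid variable is effectively a two-dimensional array with named
# columns, and the Grid Iterator runs its attached component once per
# row, binding each row's values to variables. Illustrative data only.
columns = ["table_name", "load_date"]
grid_variable = [
    ["customers", "2024-05-01"],
    ["orders",    "2024-05-01"],
]

runs = []
for row in grid_variable:            # what Grid Iterator loops over
    bound = dict(zip(columns, row))  # row values mapped to named variables
    runs.append(f"load {bound['table_name']} for {bound['load_date']}")

print(runs)
```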

Grid Variables

Azure features

We've added the Azure SQL component, so now you can run an SQL query on an Azure SQL database and copy the results to a table. You can also now search your Azure Blob Storage, as this new data input type has been added for the File Iterator component.

New Mailchimp component and Flex connector for Mixpanel

Our new Mailchimp component lets you query the Mailchimp API to retrieve data and load it into a table. You can then use transformation components to enrich and manage the data in permanent tables. And you'll now find the Mixpanel Flex connector in Designer.

Use your dbt models directly inside Designer!

Designer Β· New features πŸŽ‰

Giving you more efficiency and control of your data transformation, our new dbt Core component is now in public preview. This component lets you add dbt models from a Git repository into your Matillion orchestration pipeline.

Using the dbt run command via the dbt Core component, you can run dbt models stored in the Git repository you have synced with. You can also debug connections and projects, compile projects, and execute tests using the debug, compile, and test commands.

We also support other commands such as build, clone, seed, show, list, and parse. See our video on this great new component for more information.

Full SaaS is better than ever with Python Pushdown

Designer Β· New features πŸŽ‰

We're thrilled to announce the ability to incorporate Python scripts into your pipelines with the new Python Pushdown component in Designer. This component lets you execute Python scripts using the Snowpark service in your Snowflake account, bringing high-code capabilities to Full SaaS solutions in the Data Productivity Cloud.
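As a rough picture of the kind of logic you might push down, here's a heavily hedged sketch: we stub Snowpark out with a plain list of rows so the transformation is visible, and the enrich function and table shape are hypothetical, not Matillion's or Snowflake's API.

```python
# Illustration only: Python Pushdown runs your script via Snowpark inside
# Snowflake. Here Snowpark is stubbed out with a plain list of rows; a
# real script would read rows through the Snowpark session instead.
def enrich(rows: list[dict]) -> list[dict]:
    """Row-wise logic of the kind a pushed-down script might apply."""
    return [{**r, "total_with_tax": round(r["total"] * 1.2, 2)} for r in rows]

rows = [{"id": 1, "total": 10.0}, {"id": 2, "total": 5.5}]
print(enrich(rows))  # each row gains a computed total_with_tax column
```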

New connectors in January

Data loading made easy

We've also made scheduling pipelines easier by introducing two modes: standard and advanced. The new standard mode lets you set the schedule in simple terms of Days, Weeks, Hours, and Minutes, rather than writing a Cron expressionβ€”which is required by the advanced mode.
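The relationship between the two modes can be sketched like this. The standard_to_cron helper and its parameters are our own simplified illustration, not the scheduler's actual fields: standard mode gathers simple values and a cron expression is effectively derived for you, while advanced mode takes the expression directly.

```python
# Simplified, illustrative mapping from "standard mode" style inputs to
# the cron expression that "advanced mode" would take directly.
def standard_to_cron(minute: int, hour: int, days_of_week: str = "*") -> str:
    """Build the five-field cron string that standard mode spares you."""
    return f"{minute} {hour} * * {days_of_week}"

print(standard_to_cron(30, 6))            # daily at 06:30
print(standard_to_cron(0, 9, "MON-FRI"))  # weekdays at 09:00
```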

And as always, we've added a new connector! Check out the Intercom Flex connector to easily load Intercom data to your cloud data warehouse.

AI notes are here!

Designer Β· New features πŸŽ‰ Β· Improvements πŸ”§

Automatically document pipelines with AI Notes

Adding notes can be a great way to document your work and make your team's life a lot easier. But why not sit back and let GenAI do the work for you?

Introducing AI notes, a one-click solution to start documenting your pipelines. Simply select the component(s) you want to document, right-click, and select Add note using AI. Notes can be regenerated or added to the canvas for later use or editing.

For more information, visit the documentation for this wonderful feature.

Connector improvements

It's another great week for connectors. Google Cloud Storage has been added as a direct-to-storage option to all Flex connectors in the Designer. This means you can load data from your service of choice directly into Google Cloud Storage with a single component.

Make sure to add your GCP credentials to the Cloud provider credentials menu before setting your component's Destination property to Cloud Storage. We've also added a new SendGrid Flex connector and added REST API service support to the SharePoint Query connector!

Do more with the Data Productivity Cloud API!

Designer Β· New features πŸŽ‰

Pipeline API

We're thrilled to announce the release of the Data Productivity Cloud's first public API. This collection allows you to check your projects, pipelines, and their status. You can execute pipelines via the API for easier automation, and terminate running pipelines, just in case.

You can check out our API reference for these endpoints or get started with the API in the documentation. But why not give our video guide a watch first to cover the basics?
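As a shape-of-the-pattern sketch only: the host, path, and payload below are placeholders, not the documented Data Productivity Cloud endpoints, so check the API reference for the real routes. We build the request without sending it, to show the general "execute a pipeline" call.

```python
import json
from urllib.request import Request

# Placeholder host, path, and payload; not the documented API. Consult
# the API reference for the real routes and request bodies.
def build_execute_request(token: str, project_id: str, pipeline: str) -> Request:
    url = f"https://example.invalid/v1/projects/{project_id}/pipeline-executions"
    body = json.dumps({"pipelineName": pipeline}).encode()
    return Request(url, data=body, method="POST",
                   headers={"Authorization": f"Bearer {token}",
                            "Content-Type": "application/json"})

req = build_execute_request("MY_TOKEN", "proj-123", "load_orders")
print(req.method, req.full_url)
```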

Low-code variables

Want the flexibility of using variables in your pipelines without messing around with the Python Script component? We've implemented a couple of new low-code components to help you out.

This helps you build more flexible pipelines that report on their own progress for easier debugging and monitoring.

Shopify connector

And of course we've added a new connector! Shopify is available in the Designer right now and can also be extended from the Custom Connector menu. Enjoy!

Azure Blob Storage is here!

Designer Β· New features πŸŽ‰

Land your data straight into Azure Blob Storage

This week we're extremely pleased to announce that Azure Blob Storage has been added as a staging option to all connectors in the Designer.

To get plugged in, simply add your Azure credentials to the Cloud provider credentials menu, then select a connector that has been added to your pipeline and open the Stage Platform property. From there, choose Existing Azure Blob Storage Location to pick from your Blob Storage containers.

For Flex connectors, users can select Cloud Storage as a Destination and then select Azure Blob Storage from there.

Happy loading!

Recent Pipelines

We know what it's like to have a lot of projects and branches, so we've added a "Recent Pipelines" section to the front page of the Hub that fast-tracks you straight to Designer for easy access to your most recently visited pipelines.

Zendesk Talk

Finally, it wouldn't be a release without another connector dropping, and we've added Zendesk Talk: just add it to your pipeline canvas to get connected. Remember to visit the Custom Connector menu to set up your Zendesk Talk configuration and even add new endpoints as desired.

Connect to Square and put your query results to good use

Designer Β· New features πŸŽ‰

Square connector

This week we've added the much-requested Square connector. This is a Flex connector so you can edit it through the Custom Connector feature to add your own endpoints and more.

Query Result to Scalar

We're also happy to announce the new Query Result to Scalar component that lets you write a custom SQL query and map its scalar output to a variable. Variables can be used in other components or even passed to Transformation pipelines.

Query Result to Scalar means you can have jobs dynamically react to live data and act accordingly. The component also comes with a basic mode so you can build a query without requiring any SQL at all!
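The idea can be sketched with SQLite standing in for your cloud data warehouse (table and query invented for illustration): a query returning a single value is mapped to a variable that later components can branch on.

```python
import sqlite3

# Sketch of the Query Result to Scalar idea; SQLite stands in for the
# warehouse, and the table and query are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (status TEXT)")
conn.executemany("INSERT INTO events VALUES (?)",
                 [("ok",), ("ok",), ("failed",)])

# Custom SQL mode: any query that returns exactly one value.
failed_count = conn.execute(
    "SELECT COUNT(*) FROM events WHERE status = 'failed'").fetchone()[0]

print(failed_count)  # 1
```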

Scheduling improvements, connectors and more!

Batch Β· CDC Β· New features πŸŽ‰ Β· Improvements πŸ”§

Scheduling improvements

Batch pipelines allow you to quickly grab as many sources as you like from your service and load them into a data warehouse... but the real power comes from scheduling these to run more frequently! We've added deeper support for Quartz cron expressions so you can have total control over your schedule frequencies.
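A quick reminder of what makes Quartz cron distinctive: it carries six or seven fields (seconds, minutes, hours, day-of-month, month, day-of-week, and an optional year), unlike the five-field Unix flavour. Here's a small, purely illustrative shape check; the scheduler itself does the real validation.

```python
# Illustrative only: splits a Quartz cron expression into its named
# fields and rejects anything that isn't 6 or 7 fields long.
FIELDS = ["seconds", "minutes", "hours", "day-of-month",
          "month", "day-of-week", "year (optional)"]

def describe(expr: str) -> dict:
    parts = expr.split()
    if len(parts) not in (6, 7):
        raise ValueError("Quartz cron needs 6 or 7 fields")
    return dict(zip(FIELDS, parts))

# Every 15 minutes during office hours, Monday to Friday:
print(describe("0 0/15 9-17 ? * MON-FRI"))
```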

Cron expression scheduling

New connectors including Workday support

And as usual we're committed to getting your favorite services integrated natively with the Data Productivity Cloud, so you can find the following connector components in Designer today:

Soft deletes for Snowflake CDC

Earlier this quarter, we released direct-to-Snowflake for CDC pipelines, which allows you to replicate your data directly to your Snowflake data warehouse. We're now expanding that with a new "Copy Table with Soft Deletes" feature. This produces a replica in Snowflake of each captured source table, but rows deleted in the source are not deleted in the target table. In addition to the columns mapped from the corresponding source table, each replica includes a boolean column, MTLN_CDC_DELETED, that tells you whether the row has been deleted in the source.
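A conceptual model of soft-delete semantics in plain Python (not the CDC agent's implementation; keys and row contents are invented): a delete captured from the source flags the target row instead of removing it.

```python
# Toy model of "Copy Table with Soft Deletes": deletes flag the target
# row via the boolean MTLN_CDC_DELETED column rather than removing it.
# Keys and row contents are invented for illustration.
target = {
    1: {"name": "alpha", "MTLN_CDC_DELETED": False},
    2: {"name": "beta",  "MTLN_CDC_DELETED": False},
}

def apply_change(key, row=None, deleted=False):
    if deleted:
        target[key]["MTLN_CDC_DELETED"] = True   # row survives, flagged
    else:
        target[key] = {**row, "MTLN_CDC_DELETED": False}

apply_change(2, deleted=True)       # row 2 deleted in the source
apply_change(3, {"name": "gamma"})  # row 3 inserted in the source

print(sorted(target))                 # [1, 2, 3]: nothing is removed
print(target[2]["MTLN_CDC_DELETED"])  # True
```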

See the CDC Snowflake Destination documentation for more information.

Connectors keep coming!

Designer Β· New features πŸŽ‰

We're committed to making your data loading journey as simple as possible and we're backing that up with yet more native integrations this week. Check out the following list of connector components available in the Designer today:

We're also happy to announce Azure Storage as a storage destination for many of our connectors, letting you land your data directly into Blob storage. We're starting with the following connectors, with many more on their way! Here we go...

ActiveCampaign, Amplitude, Anaplan, Braze, Brevo, Chargebee, CircleCI, Concord, Confluence, Datadog, Delighted, Eventbrite, Freshdesk, Gong, Klaviyo, LaunchDarkly, PagerDuty, PayPal, Pendo, Productboard, Recurly, Slack, Smartsheet, Snyk, TikTok, Toggl, and Twilio!

Azure storage

A better way to build pipelines

A new week, and new improvements in the Data Productivity Cloud's pipeline designer. With the huge number of integrations being added, we wanted to make it easier to find components and keep the pipeline building experience quick and intuitive. We've added the ability to choose which output connection the new component should be added to, meaning less reconfiguring for you to do after it's added.

Once your data loading components are on the canvas, you might notice the new Sample tab. This means you can sample loaded data directly from the load component without the need for a transformation pipeline, letting you check your data loads immediately and troubleshoot as you build.

It's pretty straightforward:

  1. Click on your validated Query component (for example, Excel Query).
  2. Click the Sample tab.
  3. Click Sample data.

Sampling inside transformation pipelines is also better than ever now that it reports a total row count for the table data, making it easier than before to verify your load is working as expected. Just hit the refresh button on the Sample tab to get a row count.


Keep custom connecting

We've added an import feature to Custom Connector that allows you to import existing custom connectors from a Matillion ETL instance, making switching to the Data Productivity Cloud easier than ever.

Sample data

Flex Connectors Galore!

Designer Β· Data Loader Β· New features πŸŽ‰

We're thrilled to announce the integration of Flex connectors to the Designer and the addition of new connectors for batch pipelines, offering you even more flexibility and connectivity options for seamless data integration.

Flex Connectors Now in Designer

The Designer now supports Flex connectors! Flex connectors can be added directly to the canvas when building pipelines and are like a preconfigured blueprint for that service. You can edit this blueprint whenever you like to update all of the components created from it, making it easier than ever to keep multiple pipelines up to date. You can now easily connect to the following 20 endpoints... deep breath!

ActiveCampaign, Amplitude, Brevo, CircleCI, Concord, Confluence, Datadog, Delighted, Freshdesk, Klaviyo, LaunchDarkly, PagerDuty, Pendo, Productboard, Recurly, Smartsheet, Snyk, TikTok, Toggl, Twilio

Check out Flex connectors for more information on using them in Designer.

Data Loader Flex connectors

In the batch pipelines department, we're also welcoming new Flex connectors!

  • CircleCI
  • Salesforce Pardot
  • Slack
  • TikTok
  • Dropbox

Don't see the connector you need? Let us know at the Ideas Portal.

Enhanced Designer Features and New Flex Connector!

Designer Β· Data Loader Β· New features πŸŽ‰

We're excited to bring you a suite of enhancements in the Designer and a new addition to the batch pipelines' Flex connectors. Here's a breakdown of the latest features designed to streamline your workflow.

Designer Updates

  • Assert View Component: We've introduced the Assert View component, enabling you to verify specific conditions of a view and halt the query if the conditions aren't met, ensuring data accuracy and integrity.
  • Revamped Add Components Panel: Finding the component you need is now a breeze with the updated panel that lists components alphabetically, un-nested, and paired with descriptor keywords like "Connectors" or "Flow" for easy filtering and search.
  • Enhanced Pipeline Canvas: The addition of a + call-to-action button streamlines the component addition process. It appears when a component is selected, and clicking it opens the Add component dialog for a smooth, intuitive workflow.
  • Table Update Component: Say hello to the new Table Update component that allows you to update a target table with input rows based on matching keys, enhancing data manipulation and management.
  • Refined Git Operations: The Git Commit changes and push option are now two separate operations, offering greater control and flexibility in managing your version control tasks.

New Flex Connector for Batch Pipelines

Batch pipelines are now enriched with the Concord Flex Connector, enhancing your options and efficiency in developing batch pipelines.

Expanding Batch Pipelines with New Flex Connectors!

Data Loader Β· New features πŸŽ‰

We're extending your data integration options with the introduction of new Flex connectors for batch pipelines:

  • ActiveCampaign Connector: Streamline the integration and management of your marketing data with ease.
  • Confluence Connector: Optimize the extraction and handling of your Confluence data for enhanced collaboration and insights.
  • Eventbrite Connector: Simplify the management of event data, making it quicker and more efficient.
  • Recurly Connector: Enhance the processing and analysis of your subscription data seamlessly.

Flex connectors are designed to be both quick to use and customisable: they are built on the Custom Connector framework, so they can be extensively tailored, yet come preconfigured to connect to your data right away. Check out Flex connectors for more information.

Enhanced Designer and Batch Pipelines with New Functionalities and Connectors!

Designer Β· Data Loader Β· New features πŸŽ‰ Β· Improvements πŸ”§

This week we've rolled out several updates enhancing both the Designer and batch pipelines. From added support for new source types and database components to new connectors, here's a detailed look at the recent enhancements.

Designer Boosts

  • Data Transfer Enhancement: The Data Transfer component in the Designer now supports SFTP as a source type, broadening your options for secure data transfer.
  • Oracle Integration: Oracle has been added as a supported database type in the Database Query and RDS Query components, opening up new avenues for data extraction and manipulation.
  • Multi-line Secret Values Support: Users of Full SaaS, Matillion hosted projects can now include multi-line secret values when creating secret definitions, adding an extra layer of security and flexibility.
  • Scheduling Made Easy: We've included an 'Add schedule' call-to-action button in the Designer UI, making the navigation to the 'Create a new schedule' menu straightforward.
  • Streamlined Cloud Credentials Association: Now, you can associate newly created cloud credentials to an environment in one workflow, simplifying the setup and management process.
  • Git β€˜Hard Reset' Functionality: A new "hard reset" option for Git in Designer allows users to reset their branch to the last local commit, enhancing version control.
  • In addition to these, the Google Ads Query component has joined the Designer's component family, offering tailored functionalities for extracting and handling Google Ads data.

Batch Pipelines Connector

Amplitude now has a connector in Data Loader's batch pipelines. Easily connect to Amplitude and pull in one or more data sources straight to data warehouse tables.

Enhancements in Batch Pipelines, Hub, and CDC Agent!

Data Loader Β· Agents Β· Hub Β· New features πŸŽ‰

We're excited to roll out our latest features and improvements to the Data Productivity Cloud. Here's what's new and improved in batch pipelines, Hub, and CDC agent.

Batch Pipelines Connectors

We've introduced new Flex connectors for use in Data Loader's batch pipelines: Brevo, Delighted, and Toggl. Flex connectors bring the best of both worlds, with out-of-the-box connectivity as well as being customisable and expandable.

Hub Features

For our registered Hub customers with Account and User Administrator privileges, you now have the flexibility to edit the Account name and Subdomain name directly. We've also enhanced the Pipeline Observability dashboard to include visibility into Data Loader Batch pipeline runs. Plus, to make diagnosing issues simpler, pipeline error messages will now be prominently displayed at the top of the Pipeline run details page.

CDC Agent Improvements

We've updated the CDC agent to version 2.87.8. A new transformation type, "Copy Table With Soft Deletes," has been introduced for pipelines with Snowflake as a destination. We've also fixed an issue with Snowflake role names and improved connection reuse for enhanced performance and reliability when using Snowflake as a destination. For more information on agents see the documentation.

New Components and a Flex Connector to Enhance Your Data Workflows!

Designer Β· Data Loader Β· New features πŸŽ‰

We're here with another round of exciting updates! The Designer has welcomed a series of new components to broaden your data integration and transformation capabilities, and batch pipelines have a new Flex connector onboard.

Newly Added Designer Components

Expanding your data handling options, we've introduced several new components to the Designer:

  • Azure Blob Storage Load and Unload for efficient data loading and extraction with Azure Blob Storage.
  • Dynamics 365 Query to seamlessly pull and manipulate your Dynamics 365 data.
  • Facebook Ads Query and Facebook Query to integrate and handle data from Facebook and Facebook Ads efficiently.
  • Salesforce Marketing Cloud Query to unlock easy access and manipulation of your marketing data housed in Salesforce.

A New Addition to Batch Pipelines

Batch pipelines also got a boost with the addition of the Snapchat Flex Connector, offering tailored connectivity to develop batch pipelines, ensuring a more streamlined and effective data transformation process.

Cloud Credentials Storage, Expanded SaaS Support, and More!

Designer Data Loader New features πŸŽ‰ Improvements πŸ”§

We're excited to share our latest updates designed to optimize your experience with enhanced features in cloud credential storage, component support, and an array of new Flex connectors. Here's the lowdown.

Designer Enhancements

Users can now store cloud provider credentials within the Designer, streamlining authentication with AWS and Azure. We've broadened the scope of components supported in a Matillion Full SaaS environment, including Data Transfer, Excel Query, File Iterator, RDS Bulk Output, RDS Query, S3 Load, S3 Unload, SNS Message, and SQS Message. Plus, utilizing an S3 location as the stage platform on all query components is now a breeze in a Matillion Full SaaS setting.

CDC Pipeline Upgrades

The CDC agent has advanced to version 2.87.1. Now, pipelines with Db2 for IBM i as a source can benefit from on-demand snapshots. We've introduced the Change Log as a transformation type for pipelines targeting Snowflake. Also, the "Create Pipeline" pages got a facelift with an in-client Help bar, offering contextual documentation to assist you seamlessly through the creation process.
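For context, a Change Log transformation writes every change event as its own row in the target, rather than merging changes into the table's current state. A hedged Python sketch of that append-only shape; the event fields here are illustrative, not the agent's actual format:

```python
# Sketch of a change-log target: one output row per change event,
# preserving the full history rather than only the final state.
# Field names ("op", "key", "row") are illustrative.

def to_change_log(events: list) -> list:
    """Produce one output row per event, keeping the operation type."""
    return [
        {"op": e["op"], "key": e["key"], **e.get("row", {})}
        for e in events
    ]

log = to_change_log([
    {"op": "insert", "key": 1, "row": {"name": "Ada"}},
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "delete", "key": 1},
])
# Three rows out: the full history of key 1, not just its final state.
```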

Batch Pipelines Boost

Meet the new arrivals in Flex connectors – PagerDuty, Snyk, Datadog, Freshdesk, Klaviyo, LaunchDarkly, Productboard, Smartsheet, and Twilio – each designed to amplify your batch pipeline development and data transformation capabilities.

New Components, Enhanced UI, and CDC Agent Upgrades!

Designer CDC New features πŸŽ‰ Improvements πŸ”§

We're excited to unveil a series of updates, featuring new components for the Designer, an enhanced user interface, and significant improvements to the CDC Agent.

Designer's Growing Suite of Components

The Designer has been enriched with the addition of the Marketo Query component, making integration with Marketo smoother than ever. Google BigQuery Query and NetSuite Query also joined the roster, offering more versatility in querying data.

We've revamped the UI for a streamlined user experience. The '+' button is now an 'Add' button accompanied by a context menu, simplifying the process of creating pipelines and folders. For those stepping into an instance of Designer with no existing pipelines, a 'Getting Started' wizard is at your service to make the setup a breeze.

CDC Agent Gets a Revamp

The CDC Agent was upgraded to version 2.83.5, with several improvements to snapshotting reliability. Snowflake users will also benefit from additional validation of the stage format.

We also introduced 'rs_id' and 'ssn' metadata fields to Oracle change records, providing more detailed insights for effective data tracking and management.
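To illustrate why these fields matter: taken together, a record set id and a SQL sequence number give each change a position, so downstream code can recover the order in which changes happened. A hedged Python sketch of that ordering; the record shape and values are made up for illustration:

```python
# Sketch: ordering Oracle CDC change records by the new metadata
# fields. The 'rs_id'/'ssn' values and record shape are illustrative.

def order_changes(records: list) -> list:
    """Sort change records by (rs_id, ssn) to recover change order."""
    return sorted(records, key=lambda r: (r["rs_id"], r["ssn"]))

ordered = order_changes([
    {"rs_id": "0x000002.0010", "ssn": 2, "op": "update"},
    {"rs_id": "0x000002.0010", "ssn": 1, "op": "insert"},
])
# The insert (ssn=1) now precedes the update (ssn=2).
```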

Bing Ads Query Component, Enhanced Collaboration, and a New Flex Connector!

Designer Data Loader New features πŸŽ‰ Improvements πŸ”§

The Designer welcomes the Bing Ads Query component, a new feature to boost team collaboration, and a fresh Flex connector for batch pipelines. Here's a quick overview!

Bing Ads Query

Harness the power of the Bing Ads Query component in the Designer for efficient data extraction and handling from Bing Ads.

Boosting Collaboration

We've added an 'Invite your Teammates' item to the help widget's Task checklist in the Designer, making team collaboration straightforward and efficient.

New Flex Connector

For batch pipelines, meet the new Pendo Flex connector, expanding your options and making data transformation more versatile.

New Components, Organizational Features, and Direct to Snowflake for CDC!

Designer Data Loader New features πŸŽ‰ Improvements πŸ”§

With the addition of new components in the Designer, intuitive organizational features, and updates to the CDC agent, navigating through your data transformation tasks just got easier.

Designer Enhancements

We've introduced the File Iterator and Stream Input components to expand your data processing capabilities. Organizing your pipelines is now a breeze with the newly added ability to sort them into folders, ensuring a cleaner and more efficient workspace. Tooltips have been added to offer instant insights on pipelines and components.

The File Iterator component allows your orchestration jobs to loop through files in storage such as S3, opening up many possibilities for adding automation logic to your workflows.
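As a mental model, a file iterator behaves like a loop over a file listing, running the downstream step once per matching file. A minimal Python sketch of that pattern; the listing and processing step are stand-ins, not Matillion internals or real S3 objects:

```python
# Sketch of the iterator pattern: list file keys, filter by a glob
# pattern, and run a processing step once per match. The keys and
# the process callback are stand-ins for real storage and components.
import fnmatch

def iterate_files(listing: list, pattern: str, process) -> list:
    """Run `process` on every file key matching `pattern`."""
    return [process(key) for key in listing if fnmatch.fnmatch(key, pattern)]

results = iterate_files(
    ["data/2024-01.csv", "data/2024-02.csv", "data/readme.txt"],
    "data/*.csv",
    lambda key: f"loaded {key}",
)
# Only the two CSV keys are processed; readme.txt is skipped.
```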

The new Stream Input transformation component allows you to read chosen columns from a Snowflake stream. Be sure to create a stream using the Create Stream component first!
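Under the hood, a Snowflake stream exposes the source table's columns plus change-metadata columns such as METADATA$ACTION, so reading "chosen columns" amounts to projecting just the ones you want. A small Python sketch of that projection; the sample rows are made up for illustration:

```python
# Sketch of column selection over Snowflake stream rows. Streams add
# metadata columns like METADATA$ACTION alongside the table's own
# columns; the sample rows below are invented for illustration.

def select_columns(rows: list, columns: list) -> list:
    """Project each stream row down to the chosen columns."""
    return [{c: row[c] for c in columns} for row in rows]

picked = select_columns(
    [{"id": 1, "name": "Ada", "METADATA$ACTION": "INSERT"},
     {"id": 2, "name": "Grace", "METADATA$ACTION": "DELETE"}],
    ["id", "METADATA$ACTION"],
)
# Each output row keeps only the id and METADATA$ACTION columns.
```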

CDC Agent and Pipelines update

The CDC Agent isn't left behind in this wave of updates; we've moved up to version 2.81.12. For those working with Snowflake, you'll be pleased to know that Direct to Snowflake has now been added as a target.