We're also happy to announce the new Query Result to Scalar component that lets you write a custom SQL query and map its scalar output to a variable. Variables can be used in other components or even passed to Transformation pipelines.
Query Result to Scalar means your jobs can react dynamically to live data and act accordingly. The component also comes with a basic mode so you can build a query without writing any SQL at all!
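Conceptually, the pattern is "run a query, take its single value, hand it to a variable." Here's a minimal sketch using Python's built-in sqlite3 module; the table and query are invented for illustration and this is not Matillion's implementation:

```python
import sqlite3

# Hypothetical source data standing in for live warehouse data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "open"), (3, "closed")])

# Query Result to Scalar: run a custom SQL query and map its
# single scalar output to a variable.
open_orders = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'open'"
).fetchone()[0]

# The variable can now drive other components, e.g. branching logic.
print(open_orders)  # → 2
```

In a pipeline, that variable could then gate a downstream component or be passed into a Transformation pipeline.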
Batch pipelines allow you to quickly grab as many sources as you like from your service and load them into a data warehouse... but the real power comes from scheduling them to run regularly! We've added deeper support for Quartz cron expressions, giving you total control over your schedule frequencies.
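If you haven't met Quartz cron before: it uses 6 or 7 space-separated fields, adding seconds (and an optional trailing year) to the classic 5-field cron format. This little sketch just names the fields; it is not Matillion's parser:

```python
# Quartz cron field layout -- 6 or 7 fields, unlike classic 5-field cron.
QUARTZ_FIELDS = ["seconds", "minutes", "hours",
                 "day-of-month", "month", "day-of-week", "year"]

def describe_quartz(expression: str) -> dict:
    """Split a Quartz cron expression into named fields."""
    parts = expression.split()
    if len(parts) not in (6, 7):
        raise ValueError("Quartz expressions have 6 or 7 fields")
    return dict(zip(QUARTZ_FIELDS, parts))

# "At 06:30:00 every weekday" -- '?' means 'no specific value'.
schedule = describe_quartz("0 30 6 ? * MON-FRI")
print(schedule["hours"])  # → 6
```

So `0 30 6 ? * MON-FRI` fires at 06:30:00 Monday through Friday, a frequency classic cron can express but Quartz states more precisely thanks to the seconds field.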
Earlier this quarter, we released direct-to-Snowflake for CDC pipelines, which allows you to replicate your data directly to your Snowflake data warehouse. We're now expanding that with a new "Copy Table with Soft Deletes" feature. This produces a replica in Snowflake of each captured source table, but rows deleted in the source are not deleted in the target table. Alongside the columns mapped from the corresponding source table, an additional boolean column, MTLN_CDC_DELETED, tells you whether each row has been deleted in the source.
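The soft-delete behaviour can be pictured with a toy apply loop: instead of removing a row when the source deletes it, the replica just flips a flag. This is an illustrative sketch only, not the CDC agent's actual implementation:

```python
# Toy replica: deletes in the source are recorded by flipping
# MTLN_CDC_DELETED rather than removing the row from the target.
def apply_change(target: dict, key, row=None, deleted=False):
    """Apply one captured change to the target replica."""
    if deleted:
        if key in target:
            target[key]["MTLN_CDC_DELETED"] = True
    else:
        target[key] = {**row, "MTLN_CDC_DELETED": False}

replica = {}
apply_change(replica, 1, {"name": "widget"})
apply_change(replica, 2, {"name": "gadget"})
apply_change(replica, 1, deleted=True)   # deleted in the source

# Row 1 survives in the target, flagged rather than removed.
print(replica[1]["MTLN_CDC_DELETED"])  # → True
```

The practical upshot: a query against the replica can still see historical rows, and filtering on `MTLN_CDC_DELETED = FALSE` recovers the live view of the source.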
We're committed to making your data loading journey as simple as possible and we're backing that up with yet more native integrations this week. Check out the following list of connector components available in the Designer today:
We're also happy to announce Azure Storage as a storage destination for many of our connectors, letting you land your data directly into Blob storage. We're starting with the following connectors, with many more on their way! Here we go...
A new week, and new improvements in the Data Productivity Cloud's pipeline designer. With the huge number of integrations being added, we wanted to make it easier to find components and keep the pipeline building experience quick and intuitive. We've added the ability to choose which output connection the new component should be added to, meaning less reconfiguring for you to do after it's added.
Once your data loading components are on the canvas, you might notice the new Sample tab. This means you can sample loaded data directly from the load component without the need for a transformation job, letting you check your data loads immediately and troubleshoot as you build.
It's pretty straightforward:
Click on your validated Query component (for example, Excel Query).
Click the Sample tab.
Click Sample data.
Sampling inside transformation pipelines is also better than ever now that it reports a total row count for the table data, making it easier to verify your load is working as expected. Just hit the refresh button on the Sample tab to get a row count.
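Under the hood, sampling boils down to two cheap queries: preview a handful of rows, and count the total. Here's a rough sqlite3 sketch of that idea (table and row values are made up; this is not how the Designer implements it):

```python
import sqlite3

# Stand-in for freshly loaded data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loaded_data (id INTEGER, value TEXT)")
conn.executemany("INSERT INTO loaded_data VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(250)])

# Preview a handful of rows...
sample = conn.execute("SELECT * FROM loaded_data LIMIT 10").fetchall()
# ...and report the total row count alongside them.
total = conn.execute("SELECT COUNT(*) FROM loaded_data").fetchone()[0]

print(len(sample), total)  # → 10 250
```

Seeing a plausible preview plus the expected total count is usually enough to confirm a load landed correctly before you build anything downstream.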
We're thrilled to announce the integration of Flex connectors to the Designer and the addition of new connectors for batch pipelines, offering you even more flexibility and connectivity options for seamless data integration.
The Designer now supports Flex connectors! Flex connectors can be added directly to the canvas when building pipelines and act like a preconfigured blueprint for that service. You can edit this blueprint whenever you like to update all of the components created from it, making it easier than ever to keep multiple pipelines up to date. You can now easily connect to the following 20 endpoints... deep breath!
We're excited to bring you a suite of enhancements in the Designer and a new addition to the batch pipelines' Flex connectors. Here's a breakdown of the latest features designed to streamline your workflow.
Assert View Component: We've introduced the Assert View component, enabling you to verify specific conditions of a view and halt the query if the conditions aren't met, ensuring data accuracy and integrity.
Revamped Add Components Panel: Finding the component you need is now a breeze with the updated panel that lists components alphabetically, un-nested, and paired with descriptor keywords like "Connectors" or "Flow" for easy filtering and search.
Enhanced Pipeline Canvas: The addition of a + call-to-action button streamlines the component addition process. It appears when a component is selected, and clicking it opens the Add component dialog for a smooth, intuitive workflow.
Table Update Component: Say hello to the new Table Update component that allows you to update a target table with input rows based on matching keys, enhancing data manipulation and management.
Refined Git Operations: The Git "Commit changes" and "Push" options are now two separate operations, offering greater control and flexibility in managing your version control tasks.
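To make the Table Update idea from the list above concrete: the component updates target rows whose keys match incoming rows. A minimal sqlite3 sketch of that key-matched update follows; the table names are invented and the component's actual SQL is warehouse-specific:

```python
import sqlite3

# Hypothetical target table and incoming update rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("CREATE TABLE updates (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO target VALUES (?, ?)", [(1, 10), (2, 20)])
conn.executemany("INSERT INTO updates VALUES (?, ?)", [(2, 99)])

# Update target rows whose key matches an input row; others untouched.
conn.execute("""
    UPDATE target
    SET qty = (SELECT qty FROM updates WHERE updates.id = target.id)
    WHERE id IN (SELECT id FROM updates)
""")

print(conn.execute("SELECT qty FROM target ORDER BY id").fetchall())
# → [(10,), (99,)]
```

Only the matched row changes; unmatched target rows keep their existing values, which is what distinguishes an update from a full replace.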
We're extending your data integration options with the introduction of new Flex connectors for batch pipelines:
ActiveCampaign Connector: Streamline the integration and management of your marketing data with ease.
Confluence Connector: Optimize the extraction and handling of your Confluence data for enhanced collaboration and insights.
Eventbrite Connector: Simplify the management of event data, making it quicker and more efficient.
Recurly Connector: Enhance the processing and analysis of your subscription data seamlessly.
Flex connectors are designed to be both quick to use and customisable: built on the Custom Connector framework for deep customisation, yet preconfigured to connect to your data right away. Check out the Flex connectors documentation for more information.
This week we've rolled out several updates enhancing both the Designer and batch pipelines. From added support for different source types and database components to new connectors, here's a detailed look at the recent enhancements.
Data Transfer Enhancement: The Data Transfer component in the Designer now supports SFTP as a source type, broadening your options for secure data transfer.
Oracle Integration: Oracle has been added as a supported database type in the Database Query and RDS Query components, opening up new avenues for data extraction and manipulation.
Multi-line Secret Values Support: Users of Full SaaS (Matillion-hosted) projects can now include multi-line secret values when creating secret definitions, adding an extra layer of security and flexibility.
Scheduling Made Easy: We've included an 'Add schedule' call-to-action button in the Designer UI, making the navigation to the 'Create a new schedule' menu straightforward.
Streamlined Cloud Credentials Association: Now, you can associate newly created cloud credentials to an environment in one workflow, simplifying the setup and management process.
Git "Hard Reset" Functionality: A new "hard reset" option for Git in Designer allows users to reset their branch to the last local commit, enhancing version control.
In addition to these, the Google Ads Query component has joined the Designer's component family, offering tailored functionalities for extracting and handling Google Ads data.
We've introduced new Flex connectors - Brevo, Delighted, and Toggl for use in Data Loader's batch pipelines. Flex connectors bring the best of both worlds with out-of-the-box connectivity as well as being customisable and expandable.
For our registered Hub customers with Account and User Administrator privileges, you now have the flexibility to edit the Account name and Subdomain name directly. We've also enhanced the Pipeline Observability dashboard to include visibility into Data Loader Batch pipeline runs. Plus, to make diagnosing issues simpler, pipeline error messages will now be prominently displayed at the top of the Pipeline run details page.
We've updated the CDC agent to version 2.87.8. A new transformation type, "Copy Table With Soft Deletes," has been introduced for pipelines with Snowflake as a destination. We've also fixed an issue with Snowflake role names and improved connection reuse for enhanced performance and reliability when using Snowflake as a destination. For more information on agents, see the documentation.
We're here with another round of exciting updates! The Designer has welcomed a series of new components to broaden your data integration and transformation capabilities, and batch pipelines have a new Flex connector onboard.
Batch pipelines also got a boost with the addition of the Snapchat Flex connector, offering tailored connectivity to Snapchat for a more streamlined and effective data transformation process.
We're excited to share our latest updates designed to optimize your experience with enhanced features in cloud credential storage, component support, and an array of new Flex connectors. Here's the lowdown.
Users can now store cloud provider credentials within the Designer, streamlining authentication with AWS and Azure. We've broadened the scope of components supported in a Matillion Full SaaS environment, including Data Transfer, Excel Query, File Iterator, RDS Bulk Output, RDS Query, S3 Load, S3 Unload, SNS Message, and SQS Message. Plus, utilizing an S3 location as the stage platform on all query components is now a breeze in a Matillion Full SaaS setting.
The CDC agent has advanced to version 2.87.1. Now, pipelines with Db2 for IBM i as a source can benefit from on-demand snapshots. We've introduced the Change Log as a transformation type for pipelines targeting Snowflake. Also, the “Create Pipeline” pages got a facelift with an in-client Help bar, offering contextual documentation to assist you seamlessly through the creation process.
Meet the new arrivals in Flex connectors – PagerDuty, Snyk, Datadog, Freshdesk, Klaviyo, LaunchDarkly, Productboard, Smartsheet, and Twilio – each designed to amplify your batch pipeline development and data transformation capabilities.
The Designer has been enriched with the addition of the Marketo Query component, making integration with Marketo smoother than ever. Google BigQuery Query and NetSuite Query also joined the roster, offering more versatility in querying data.
We've revamped the UI for a streamlined user experience. The "+" button is now an "Add" button accompanied by a context menu, simplifying the process of creating pipelines and folders. For those stepping into an instance of Designer with no existing pipelines, a "Getting Started" wizard is at your service to make the setup a breeze.
We've introduced the File Iterator and Stream Input components to expand your data processing capabilities. Organizing your pipelines is now a breeze with the newly added ability to sort them into folders, ensuring a cleaner and more efficient workspace. Tooltips have been added to offer instant insights on pipelines and components.
The File Iterator component allows your orchestration jobs to loop through files in storage such as S3, opening up many possibilities for adding automation logic to your workflows.
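The looping pattern itself is simple: enumerate files matching a pattern, then run per-file logic on each. Here's a local-filesystem stand-in for that idea (in practice you'd be iterating objects in S3; the file names are invented, and this is not the component's implementation):

```python
import pathlib
import tempfile

# Create a throwaway directory with a few example files.
workdir = pathlib.Path(tempfile.mkdtemp())
for name in ("a.csv", "b.csv", "notes.txt"):
    (workdir / name).write_text("example")

processed = []
for path in sorted(workdir.glob("*.csv")):   # filter, like a file pattern
    processed.append(path.name)              # per-file pipeline logic here

print(processed)  # → ['a.csv', 'b.csv']
```

Each matched file would typically feed a load or transfer step in the orchestration job, with non-matching files skipped by the pattern.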
The CDC Agent isn't left behind in this wave of updates; we've moved up to version 2.81.12. For those working with Snowflake, you'll be pleased to know that Direct to Snowflake has now been added as a target.
We're rolling out updates to enhance your experience: a revamped code editor for a more intuitive coding journey, additional components to broaden your data handling capabilities, and updates to the CDC Agent for an enriched performance. Here's a detailed look!
The Designer's code editor has been refined, now featuring syntax highlighting and IntelliSense for quicker code writing and reviewing, plus enhanced validation support for SQL, Python, and Bash. Access a suite of commands with a simple tap of F1 and smoothly navigate your code with Visual Studio Code shortcut keys.
We've updated the CDC Agent to version 2.81.6. Oracle Signal Table validation on PDBs and an updated "Unavailable Value" placeholder in the PostgreSQL connector are part of this update, offering more tools at your disposal.