
Microsoft SQL Server Load

Note

For new data pipelines, the Microsoft SQL Server Load component supersedes the option to use the Database Query component to retrieve data from Microsoft SQL Server. Existing pipelines that use the Database Query component to fetch data from Microsoft SQL Server will continue to work as expected.

The Microsoft SQL Server Load component runs SQL queries on an accessible Microsoft SQL Server database and copies the results to a table via storage. You can query cloud or on-premises databases, as long as they are network-accessible. You can use this component to stage data (load it into a table) for further processing and transformation. The target table should be considered temporary, because it is either truncated or recreated each time the component runs.

If the component requires access to a cloud provider, it will use the cloud credentials associated with your environment to access resources.

To stage data to Azure Blob Storage, the Azure credentials associated with your environment must be assigned the Storage Blob Data Contributor role. For more information, read User assigned with the Storage Blob Data Contributor role.


Properties

Reference material is provided below for the Connect, Configure, Destination, and Advanced Settings properties.

Connect

Authentication Method = drop-down

Currently, Username & Password is supported. Alternatively, this can be left empty to authenticate via the Connection Options property.


Username = string

Your Microsoft SQL Server username. Optional because authentication can also be performed using the Connection Options property.


Password = drop-down

Choose the secret definition that represents your credentials for this connector. Optional because authentication can also be performed using the Connection Options property.

If you have not already saved your credentials for this connector as a secret definition, click Add secret to create a secret definition representing these credentials. Read Secrets and secret definitions for details about creating a secret definition.


Connection URL = string

The URL for your chosen Microsoft SQL Server database. The general pattern of the URL is jdbc:jtds:sqlserver://<host>/<database>.

Substitute appropriate values for the <host> and <database> parameters in this URL. Although many parameters and options can be appended to the URL, it is generally easier to add them in the Connection Options property.
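
For example, assuming a hypothetical host and database name:

    jdbc:jtds:sqlserver://sqlserver.example.com/SalesDB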


Connection Options = column editor

  • Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
  • Value: A value for the given parameter.

Click the Text Mode toggle at the bottom of the Connection Options dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode.
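
For example, you might add a single parameter-value pair such as the following. This is illustrative only; the parameters your driver actually accepts are listed in the data model.

  • Parameter: loginTimeout
  • Value: 30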


SSH Tunnel = drop-down

Select an SSH Tunnel from the list of Network items. For detailed usage instructions, read the SSH Tunneling documentation.

Note

If selected, the Connection URL will be the data source that your secure tunnel connects to.

Configure

Load Type = drop-down

  • Full Load: Select this option to load your entire dataset.
  • Incremental Load: Only available for Snowflake. Select this option to only load new and updated records from your dataset.

Mode = drop-down

  • Basic: This mode will build a query for you using settings from the Schema, Data Source, Data Selection, Data Source Filter, Combine Filters, and Row Limit parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode will require you to write an SQL-like query to call data from the service you're connecting to. The available fields and their descriptions are documented in the data model.

    Note

    Advanced mode is currently not supported when Incremental Load is selected.

There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.

Note

While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which is impossible with regular SQL.


SQL Query = code editor

This is an SQL-like SELECT query, written in the SQL accepted by your cloud data warehouse. Treat collections as table names, and fields as columns. Only available in Advanced mode.
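
As an illustration only, an Advanced-mode query might look like the following, assuming a hypothetical dbo.Orders data source with CustomerID, OrderDate, and TotalAmount fields:

    -- Select three fields and filter on a date field.
    SELECT CustomerID, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE OrderDate >= '2024-01-01';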


Schema = drop-down

Select the database schema to load data from.


Data Source = drop-down

Select a single data source to be extracted from the source system and loaded into a table in the destination. The source system defines the data sources available. Use multiple components to load multiple data sources.


Data Selection = dual listbox

Choose one or more columns to return from the query. The available columns depend on the selected data source. Move columns from left to right to include them in the query.

To use grid variables, tick the Use Grid Variable checkbox at the bottom of the Data Selection dialog.


Data Source Filter = column editor

Define one or more filter conditions that each row of data must meet to be included in the load.

  • Input Column: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while comparators such as "Greater than" and "Less than" work only with numerics. "Like" allows the wildcard character % at the start and end of a string value to match a column. "Null" matches only null values, ignoring whatever is entered in the Value field. Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from.
  • Value: The value to be compared.

Click the Text Mode toggle at the bottom of the Data Source Filter dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode.


Combine Filters = drop-down

The data source filters you have defined can be combined using either And or Or logic. If And, then all filter conditions must be satisfied to load the data row. If Or, then only a single filter condition must be satisfied. The default is And.

If you have only one filter, or no filters, this parameter is essentially ignored.
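
To illustrate, two filters (Status Is Equal to 'Shipped', and Quantity Is Greater than 10) combined with And are conceptually equivalent to the query below. This is a sketch only, assuming a hypothetical Orders data source; the component builds the actual query for you.

    SELECT *
    FROM Orders                -- hypothetical data source
    WHERE Status = 'Shipped'   -- filter 1: Status Is Equal to 'Shipped'
      AND Quantity > 10;       -- filter 2: Quantity Is Greater than 10, combined using And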


Row Limit = integer

Set a numeric value to limit the number of rows that are loaded. The default is an empty field, which will load all rows.


High-Water Mark Selection = drop-down

When Incremental Load is selected, select a datetime field from your dataset that is always updated when your data changes, such as SystemModstamp. The connector will record the maximum value of this field each time you run this pipeline. On subsequent runs when Incremental Load is selected, only data with a higher value in this field will be loaded.
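
Conceptually, an incremental run applies a condition equivalent to the sketch below, assuming a hypothetical dbo.Orders data source, SystemModstamp as the high-water mark field, and a previously recorded maximum value. This is illustrative only; the component builds and tracks the condition for you.

    -- Only rows modified since the last recorded high-water mark are loaded.
    SELECT *
    FROM dbo.Orders
    WHERE SystemModstamp > '2024-02-29 10:07:36.969';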


Destination

Destination = drop-down

Select the destination for your data. This is either a table in Snowflake or files in cloud storage.

  • Snowflake: Load your data into a table in Snowflake. The data must first be staged via Snowflake or a cloud storage solution.
  • Cloud Storage: Load your data directly into files in your preferred cloud storage location. The format of these files can differ between source systems and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.

Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.

Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value [Environment Default] uses the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database to access. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table Name = string

The name of the table to be created in your Snowflake database. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.


Load Strategy = drop-down

Define what happens if the table name already exists in the specified Snowflake database and schema. A conceptual sketch of these strategies is shown after the list below.

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
  • Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it is appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows. The second run appends the same 100 records, so the table then holds 200 records; the third run brings it to 300, and so on.
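
The sketch below is conceptual only (it is not the SQL the component actually generates) and assumes a hypothetical target table my_schema.stg_orders with already-staged data in my_schema.stg_orders_stage:

    -- Truncate and Insert: keep the table definition, remove all existing rows, then insert.
    TRUNCATE TABLE my_schema.stg_orders;
    INSERT INTO my_schema.stg_orders SELECT * FROM my_schema.stg_orders_stage;

    -- Append: insert the new rows without touching the existing rows.
    INSERT INTO my_schema.stg_orders SELECT * FROM my_schema.stg_orders_stage;

    -- Replace would drop and recreate the table instead; Fail if Exists would stop the pipeline.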

Primary Keys = dual listbox (optional)

Select one or more columns to be designated as the table's primary key.


Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Access Strategy = drop-down (optional)

Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.

  • Credentials: Connects to the external stage (AWS, Azure) using your configured cloud provider credentials. Not available for Google Cloud Storage.
  • Storage Integration: Use a Snowflake storage integration to grant Snowflake access to read data from, and write data to, a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.

Stage Platform = drop-down

Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.

  • Amazon S3: Stage your data on an AWS S3 bucket.
  • Snowflake: Stage your data on a Snowflake internal stage.
  • Azure Storage: Stage your data in an Azure Blob Storage container.
  • Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.

Click one of the tabs below for documentation applicable to that staging platform.

Storage Integration = drop-down

Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.


Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Internal Stage Type = drop-down

Select the Snowflake internal stage type. Use the Snowflake links provided to learn more about each type of stage.

  • User: Each Snowflake user has a user stage allocated to them by default for file storage. You may find the user stage convenient if your files will only be accessed by a single user, but need to be copied into multiple tables.
  • Named: A named stage provides high flexibility for data loading. Users with the appropriate privileges on the stage can load data into any table. Furthermore, because the stage is a database object, any security or access rules that apply to all objects will apply to the named stage.

Named stages can be altered and dropped. User stages cannot.


Named Stage = drop-down

Select your named stage. Read Creating a named stage to learn how to create a new named stage.

Warning

There is a known issue where named stages that include special characters or spaces are not supported.

Storage Integration = drop-down

Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.


Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Stage Access Strategy = drop-down (optional)

Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.

  • Storage Integration: Use a Snowflake storage integration to grant Snowflake access to read data from, and write data to, a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.

Storage Integration = drop-down

Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.


GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.

Load Strategy = drop-down (optional)

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrite existing files with matching structure.

The following describes how this parameter works with the Folder Path and File Prefix parameters:

  • Append Files in Folder, with a defined folder path and file prefix: Files will be stored under the structure folder/prefix-timestamp-partX, where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
  • Append Files in Folder, without a defined folder path and file prefix: Files will be stored under the structure uniqueID/timestamp-partX, where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
  • Overwrite Files in Folder, with a defined folder path and file prefix: Files will be stored under the structure folder/prefix-partX, where X is the part number, starting from 1. All files with matching structures will be overwritten.
  • Overwrite Files in Folder, without a defined folder path and file prefix: Validation will fail. A folder path and file prefix must be supplied for this load strategy.

Folder Path = string (optional)

The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.


File Prefix = string (optional)

A string of characters that precedes the names of the written files. This can be useful for organizing the files in your storage location.
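
For example, with the Append Files in Folder load strategy, a hypothetical folder path of exports and a file prefix of orders, a written file would follow the folder/prefix-timestamp-partX structure described above, such as exports/orders-20240229100736969-part1.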


Storage = drop-down

The cloud storage location where your data will be loaded into files. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.


Advanced Settings

Fetch Size = integer (optional)

Specify the batch size of rows to fetch at a time, for example, 500. When left blank, the chosen database's driver default fetch size is used.


Parse 'Null' & Empty Strings as NULL = boolean

Converts common strings that represent null into a null value. This is case-sensitive and works with the following strings: "", "NULL", "NUL", "Null", "null". The default is No.

Note

Currently, this property is only applicable when using Snowflake as your destination.


Trim String Columns = boolean

When Yes, leading and trailing whitespace characters are removed from string columns. The default is No.