
Table Input

Read chosen columns from an input table or view into the job.

This component will also work with external tables in Snowflake, Amazon Redshift, and Google BigQuery.

Properties

Snowflake

Name = string

A human-readable name for the component.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table Name = drop-down

The name of the input table or view. The tables and views found in the currently selected environment are provided to choose from.

You can change the currently selected environment in the Environments section.


Column Names = dual listbox

Once the Table Name is set, the columns become available to choose from. Select which columns to pass along.

If you don't like the column names, consider using a Rename component to change them.


Offset = drop-down

Offsets the table contents by the specified number of seconds. This is a function of Snowflake's Time Travel feature, allowing you to see a table as it was X seconds ago.
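Under the hood, a non-zero offset corresponds to Snowflake's Time Travel AT clause. An offset of 300 seconds is roughly equivalent to the following query (database, schema, table, and column names are illustrative):

```sql
-- Read MY_TABLE as it was 300 seconds ago using Snowflake Time Travel.
-- Note the negative value: AT(OFFSET => ...) takes seconds in the past.
SELECT ORDER_ID, ORDER_TOTAL
FROM MY_DB.MY_SCHEMA.MY_TABLE
  AT(OFFSET => -300);
```

The offset cannot reach further back than the table's Time Travel retention period.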

Delta Lake on Databricks

Name = string

A human-readable name for the component.


Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Matillion ETL environment setup. Selecting a catalog will determine which databases are available in the next parameter.


Database = drop-down

Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup.


Table = drop-down

Select the table. The tables available are those found in the currently selected database.


Column Names = dual listbox

Select which columns to load from the selected table.


Offset Type = drop-down

Select the type of offset to apply: a timestamp or a version number. The default is None.


Offset = drop-down

Select the timestamp or version to offset by. This is a function of Delta Lake's Time Travel feature, allowing you to see a table as it was at a given timestamp or at a defined version. This property is hidden when Offset Type is None.
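The two offset types map to Delta Lake's time travel syntax. Assuming an illustrative table named my_db.orders, the generated queries would resemble:

```sql
-- Offset Type = Timestamp: read the table as it was at a point in time.
SELECT order_id, order_total
FROM my_db.orders TIMESTAMP AS OF '2024-01-15T00:00:00Z';

-- Offset Type = Version: read a specific version from the Delta transaction log.
SELECT order_id, order_total
FROM my_db.orders VERSION AS OF 42;
```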

Amazon Redshift

Name = string

A human-readable name for the component.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Table Name = drop-down

The name of the input table or view. The tables and views found in the currently selected environment are provided to choose from.

You can change the currently selected environment in the Environments section.


Column Names = dual listbox

Once the Table Name is set, the columns become available to choose from. Select which columns to pass along.

If you don't like the column names, consider using a Rename component to change them.


Trim Columns = drop-down

Wraps each selected column in a BTRIM function, which strips leading and trailing spaces from the column's values. See the Redshift documentation for details.
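With Trim Columns enabled, the generated query wraps each selected column in BTRIM, roughly as follows (schema, table, and column names are illustrative):

```sql
-- BTRIM strips leading and trailing spaces from each value;
-- aliasing preserves the original column names in the output.
SELECT BTRIM("customer_name") AS "customer_name",
       BTRIM("city")          AS "city"
FROM my_schema.my_table;
```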

Google BigQuery

Name = string

A human-readable name for the component.


Target Project = drop-down

Select the Google Cloud project. The special value, [Environment Default], will use the project defined in the environment. For more information, read Creating and managing projects.


Dataset = drop-down

Select the Google BigQuery dataset containing the input table or view. The special value, [Environment Default], will use the dataset defined in the environment. For more information, read Introduction to datasets.


Target Table = drop-down

The name of the input table or view. The tables and views found in the currently selected environment are provided to choose from.

You can change the currently selected environment in the Environments section.


Column Names = dual listbox

Once the Target Table is set, the columns become available to choose from. Select which columns to pass along.

If you don't like the column names, consider using a Rename component to change them.


Include Partition Time = drop-down

Opt whether to include the '_PARTITIONTIME' pseudo-column from the partitioned table that holds timestamps for the data in the table. This property is only visible when a partitioned table is selected in the 'Target Table' property. Partitioned tables can be created through the 'Partitioning' property in the Create Table Orchestration component.


Partition Time Alias = string

Choose a name for a new column that will take on the '_PARTITIONTIME' pseudo-column data. This name cannot be the same as another column already in the table. This property is only visible when a partitioned table is selected in the 'Target Table' property. Partitioned tables can be created through the 'Partitioning' property in the Create Table Orchestration component.
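For an ingestion-time partitioned table, including the pseudo-column under an alias is roughly equivalent to the following query (project, dataset, table, column, and alias names are illustrative):

```sql
-- _PARTITIONTIME only exists on ingestion-time partitioned tables,
-- and must be aliased to appear as a regular column in the output.
SELECT _PARTITIONTIME AS load_time, order_id, order_total
FROM `my-project.my_dataset.orders`;
```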

Azure Synapse Analytics

Name = string

A human-readable name for the component.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on schemas, read the Azure Synapse documentation.


Table = drop-down

Select the table. The tables available are those found in the currently selected environment.


Column Names = dual listbox

Once the Table is set, the columns become available to choose from. Select which columns to pass along.

If you don't like the column names, consider using a Rename component to change them.


Strategy

Generates a SELECT query against the chosen table or view.
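For example, choosing two columns from a table produces a query of this general shape (names are illustrative; the exact quoting varies by platform):

```sql
-- Only the columns chosen in Column Names appear in the SELECT list.
SELECT "ORDER_ID", "ORDER_TOTAL"
FROM "MY_SCHEMA"."MY_TABLE";
```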

