
Query Result To Scalar

The Query Result To Scalar component enables users to write any custom SQL query that returns a scalar value. This value can then be mapped to a project or pipeline variable for use in other pipeline components.


Properties

Snowflake

Name = string

A human-readable name for the component.


Mode = drop-down

  • Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode requires you to write an SQL-like query to retrieve data from Snowflake. The available fields and their descriptions are documented in the data model.
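For example, an Advanced-mode query might compute a single aggregate value. The table and column names below are purely illustrative:

```sql
-- Hypothetical example: return one scalar value
-- Note: no trailing semicolon in this component
SELECT COUNT(*) AS order_count
FROM my_schema.orders
WHERE order_date >= '2024-01-01'
```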

Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table = string

The name of the table to be queried.


Table Columns = dual listbox

Select which columns to take from the table as part of the query.


Order By = dual listbox

Choose the columns by which to sort rows. If multiple columns are selected, rows are sorted by the first-listed column first, then by the next listed column, and so on.


Sort = drop-down

Select whether rows are sorted in Ascending or Descending order.


Limit = integer

Set the maximum number of rows to return.


Filter Conditions = column editor

  • Input Column Name: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null".
    • "Equal to" can match exact strings and numeric values; other comparators, such as "Greater than" and "Less than", work only with numerics.
    • The "Like" operator allows the wildcard character % at the start and end of a string value to match a column.
    • The "Null" operator matches only null values, ignoring whatever the Value field is set to.
    • Not all data sources support all comparators, so only a subset of the above may be available to choose from.
  • Value: The value to be compared.

Combine Condition = drop-down

Select whether the defined filter conditions are combined with And or with Or.
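Taken together, the filter settings correspond to a WHERE clause in the generated query. For instance, two conditions combined with And (column names and values here are illustrative):

```sql
-- Filter 1: status  Is  "Equal to"      'shipped'
-- Filter 2: amount  Is  "Greater than"  100
-- Combine Condition: And
WHERE status = 'shipped'
  AND amount > 100
```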


Query = code editor

This property opens an editor. On the left, users can explore tables and their metadata from environments. Both environment and job variables are also listed in the bottom-left.

SQL queries can be written in the main panel and tested using the Sample button, which will display results below.

This property is only available when Mode is set to Advanced.

Warning

Do not end SQL statements with a semicolon in this component.


Scalar Variable Mapping = column editor

Scalar results from the SQL query can be mapped to project and pipeline variables.

Use the Input Column Name drop-down to select a scalar returned by the query. Use the Scalar Variable Name drop-down to select a variable to map the scalar to. Click + to add more mappings.
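Each aliased output column of the query appears as a selectable scalar in the Input Column Name drop-down, so a query can supply several mappings at once. A hypothetical example, assuming a table named my_schema.orders:

```sql
-- Returns one row; each alias can be mapped to a separate variable
SELECT MAX(amount) AS max_amount,
       MIN(amount) AS min_amount
FROM my_schema.orders
```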

Note

Variables used in the mapping must exist before this component runs. Read Variables to learn how to create them.

Amazon Redshift (preview)

Name = string

A human-readable name for the component.


Mode = drop-down

  • Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode requires you to write an SQL-like query to retrieve data from Redshift. The available fields and their descriptions are documented in the data model.
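For example, an Advanced-mode query might return the latest timestamp from an audit table as a scalar. The table and column names below are purely illustrative:

```sql
-- Hypothetical example: latest load time as a single scalar
-- Note: no trailing semicolon in this component
SELECT MAX(loaded_at) AS last_load
FROM my_schema.audit_log
```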

Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Table = string

The name of the table to be queried.


Table Columns = dual listbox

Select which columns to take from the table as part of the query.


Order By = dual listbox

Choose the columns by which to sort rows. If multiple columns are selected, rows are sorted by the first-listed column first, then by the next listed column, and so on.


Sort = drop-down

Select whether rows are sorted in Ascending or Descending order.


Limit = integer

Set the maximum number of rows to return.


Filter Conditions = column editor

  • Input Column Name: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null".
    • "Equal to" can match exact strings and numeric values; other comparators, such as "Greater than" and "Less than", work only with numerics.
    • The "Like" operator allows the wildcard character % at the start and end of a string value to match a column.
    • The "Null" operator matches only null values, ignoring whatever the Value field is set to.
    • Not all data sources support all comparators, so only a subset of the above may be available to choose from.
  • Value: The value to be compared.

Combine Condition = drop-down

Select whether the defined filter conditions are combined with And or with Or.


Query = code editor

This property opens an editor. On the left, users can explore tables and their metadata from environments. Both environment and job variables are also listed in the bottom-left.

SQL queries can be written in the main panel and tested using the Sample button, which will display results below.

This property is only available when Mode is set to Advanced.

Warning

Do not end SQL statements with a semicolon in this component.


Scalar Variable Mapping = column editor

Scalar results from the SQL query can be mapped to project and pipeline variables.

Use the Input Column Name drop-down to select a scalar returned by the query. Use the Scalar Variable Name drop-down to select a variable to map the scalar to. Click + to add more mappings.

Note

Variables used in the mapping must exist before this component runs. Read Variables to learn how to create them.

