Table Iterator

The Table Iterator component lets users loop over rows of data within a table or view and run another component for each row.

This component implements a simple loop over rows of data in an existing table. It enables you to run an attached component multiple times, each time with different values set on any number of variables taken from columns of that table. Those variables can then be referenced from the attached component.
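The loop described above can be sketched as follows. This is a conceptual illustration only, not Matillion's implementation; the function and variable names (`iterate_table`, `column_mapping`, `run_component`) are hypothetical:

```python
# Conceptual sketch: a table iterator runs the attached component once per
# row, exposing the mapped column values as variables for that iteration.

def iterate_table(rows, column_mapping, run_component):
    """rows: list of dicts; column_mapping: {column_name: variable_name}."""
    for row in rows:
        # Build this iteration's variable values from the mapped columns.
        variables = {var: row[col] for col, var in column_mapping.items()}
        run_component(variables)

# Each iteration sees a different value of the mapped variable.
seen = []
iterate_table(
    rows=[{"region": "EU"}, {"region": "US"}],
    column_mapping={"region": "region_var"},
    run_component=lambda v: seen.append(v["region_var"]),
)
# seen == ["EU", "US"]
```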

To attach the iterator to another component, use the connection ring beneath the table iterator to connect to the input of the other component. The two components will automatically "snap" together, with the table iterator sitting on top of the other component, and can be dragged around the canvas as a single unit. To uncouple the two components, delete the table iterator component.

If you need to iterate more than one component, put them into a separate orchestration pipeline or transformation pipeline and use a Run Transformation or Run Orchestration component attached to the iterator. In this way, you can run an entire pipeline flow multiple times, once for each row of variable values.

The iterations are set up in advance, so the connection to the input table can be closed before any iterations are performed. If the attached component modifies the iteration table, those changes will not be reflected during the current run. Furthermore, to control runaway processes and conserve resources, only a limited number of rows of a table are considered for iteration. If you need to iterate over many rows, it is recommended that you instead use the table in a transformation pipeline and join to the table being iterated.
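The "set up in advance" behavior means the row set is snapshotted (up to the iteration cap) before any iteration runs. A minimal sketch of that idea, with hypothetical names (`plan_iterations`, `MAX_ITERATIONS` mirrors the documented 5000-row limit):

```python
# Sketch: rows are materialized once, capped, before anything runs, so
# changes made to the source table mid-run do not affect the loop.
MAX_ITERATIONS = 5000  # documented limit for all iterator components

def plan_iterations(fetch_rows):
    # Snapshot the row set up front, truncated to the cap.
    return list(fetch_rows())[:MAX_ITERATIONS]

table = [1, 2, 3]
plan = plan_iterations(lambda: iter(table))
table.append(4)           # modifying the table mid-run...
assert plan == [1, 2, 3]  # ...does not change the planned iterations
```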

All iterator components are limited to a maximum 5000 iterations.


Name = string

A human-readable name for the component.

Mode = drop-down

  • Basic: Construct the query using the clickable properties. This is the default setting.
  • Advanced: Manually write the SQL query.

Query = code editor

Manually write the SQL query if Mode is Advanced.

Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more. Only if Mode is Basic.

Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more. Only if Mode is Basic.

Target Table = drop-down

The name of the target table. Only if Mode is Basic.

Concurrency = drop-down

  • Concurrent: Iterations are run concurrently.
  • Sequential: Iterations are done in sequence, waiting for each to complete before starting the next. This is the default setting.

Full SaaS deployments are limited to 20 concurrent tasks, with additional tasks being queued. Hybrid SaaS deployments with customer-hosted agents have 20 concurrent tasks per agent instance, with a maximum of 100 instances if configured accordingly.
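The concurrent-task limit can be pictured as a bounded worker pool: at most 20 iterations execute at once and the rest queue. A hedged sketch using Python's thread pool as a stand-in for the platform's scheduler (the names and mechanism are illustrative, not Matillion's internals):

```python
# Sketch: run 100 iterations through a pool capped at 20 workers;
# iterations beyond the cap simply wait their turn in the queue.
from concurrent.futures import ThreadPoolExecutor
import threading

MAX_CONCURRENT_TASKS = 20  # mirrors the documented Full SaaS cap
active = 0
peak = 0
lock = threading.Lock()

def run_iteration(i):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)  # track how many ran simultaneously
    # ... the attached component would run here ...
    with lock:
        active -= 1

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_TASKS) as pool:
    list(pool.map(run_iteration, range(100)))  # 100 iterations queued

assert peak <= MAX_CONCURRENT_TASKS
```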

Column Mapping = column editor

Map the columns in the target table to environment variables. Click + to add a mapping and enter the following:

  • Column Name: Select a table column name.
  • Variable Name: Select an existing variable name.

Order By = dual listbox

Move columns to the right-hand box to define the "order by" sequence.

This property is only available when Concurrency is set to Sequential.

Sort = drop-down

Choose whether to order the rows ascending or descending. The default is ascending.

This property is only available when Concurrency is set to Sequential.

Break on Failure = drop-down

  • No: Attempt to run the attached component for each iteration, regardless of success or failure. This is the default setting.
  • Yes: If the attached component does not run successfully, fail immediately.

If a failure occurs during any iteration, the failure link is followed. This parameter controls whether it is followed immediately or after all iterations have been attempted.

This property is only available when Concurrency is set to Sequential. When set to Concurrent, all iterations will be attempted.
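The Break on Failure behavior in Sequential mode can be sketched as a simple loop: "Yes" stops at the first failed iteration, while "No" attempts every iteration and reports overall failure at the end. The function and parameter names below are illustrative only:

```python
# Sketch: sequential iteration with an optional early break on failure.
# Either way, an overall failure is reported if any iteration failed
# (i.e. the failure link would be followed).

def run_sequential(iterations, run_component, break_on_failure):
    any_failed = False
    for variables in iterations:
        ok = run_component(variables)
        if not ok:
            any_failed = True
            if break_on_failure:
                break  # Yes: fail immediately, skip remaining iterations
    return not any_failed

attempted = []
def component(value):
    attempted.append(value)
    return value != "bad"  # the "bad" iteration fails

run_sequential(["a", "bad", "c"], component, break_on_failure=True)
# attempted == ["a", "bad"]  ("c" was never attempted)
```

With `break_on_failure=False`, all three iterations are attempted and the function still returns an overall failure.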

Available for Snowflake, Databricks, and Amazon Redshift (preview).