Text Output

The Text Output component creates text files in a specified Amazon S3 bucket and populates them with data from an Amazon Redshift table or view.

The data can be split across multiple files based on a per-file row count.

This component is similar in effect to the S3 Unload component. However, S3 Unload unloads data in parallel directly from Redshift to S3 and may be faster.
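As a rough, illustrative sketch only (not Matillion's actual implementation), the Python snippet below does something comparable: it reads rows from a Redshift table with psycopg2, joins the values with a delimiter, and uploads the result to S3 with boto3. The connection details, table, bucket, and object key are all placeholders.

```python
# Illustrative sketch only: fetch rows from a Redshift table and write one
# delimited text file to S3. All names and credentials are placeholders.
import boto3
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="etl_user", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM public.sales")              # placeholder table
    header = ",".join(col[0] for col in cur.description)   # column names
    lines = [header]
    for row in cur.fetchall():
        # Join values with the delimiter; represent NULLs as an empty string.
        lines.append(",".join("" if v is None else str(v) for v in row))

body = ("\n".join(lines) + "\n").encode("utf-8")
boto3.client("s3").put_object(
    Bucket="example-bucket",          # placeholder for the S3 URL Location
    Key="exports/sales_0",            # placeholder <prefix>_<file-number>
    Body=body,
)
```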


Properties

Name = string

A human-readable name for the component.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Table Name = string

The table or view to unload to Amazon S3.


S3 URL Location = string

The URL of the S3 bucket that the data is written to.


S3 Object Prefix = string

Create data files in S3 beginning with this prefix. Output files are named in the format: <prefix>_<file-number>


Delimiter = string

The string used to separate values in the output. Defaults to a comma.


Compress Data = drop-down

Whether the output files written to the S3 bucket are gzip-compressed.
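For illustration, this is roughly how a gzip-compressed payload could be produced before upload when compression is enabled; the content is made up, and the exact file naming used by the component is not specified here.

```python
import gzip

text = "id,name\n1,Alice\n2,Bob\n"            # example file content
body = gzip.compress(text.encode("utf-8"))    # gzip-compressed bytes ready for S3
```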


Null As = string

Replace NULL in the input data with the specified string in the output.


Output Type = drop-down

  • CSV: If a value contains the specified delimiter, a newline, or a double quote, the value is written enclosed in double quotes, and any double quote characters within the value are escaped with an additional double quote.
  • Escaped: Inserts backslashes to escape delimiter, newline, or backslash characters.
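To make the two modes concrete, here is a small hypothetical Python sketch contrasting CSV-style quoting with backslash escaping for a single row; the comma delimiter and the values are made up.

```python
import csv
import io

row = ["plain", "has,comma", 'has "quote"', "line\nbreak"]

# CSV mode: values containing the delimiter, a newline, or a double quote are
# wrapped in double quotes, and embedded double quotes are doubled.
buf = io.StringIO()
csv.writer(buf, delimiter=",", quoting=csv.QUOTE_MINIMAL).writerow(row)
print(buf.getvalue())
# plain,"has,comma","has ""quote""","line
# break"

# Escaped mode: backslash, delimiter, and newline characters are prefixed
# with a backslash; double quotes are left untouched.
def escape(value, delimiter=","):
    out = value.replace("\\", "\\\\")
    for ch in (delimiter, "\n"):
        out = out.replace(ch, "\\" + ch)
    return out

print(",".join(escape(v) for v in row))
# plain,has\,comma,has "quote",line\
# break
```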

Multiple Files = drop-down

If enabled, multiple files will be created, each containing up to the maximum number of rows specified in Row limit per file.


Row limit per file = integer

Maximum number of rows per file.
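As a hedged illustration of how Multiple Files, the row limit, and the S3 Object Prefix fit together, the sketch below splits a list of rows into numbered chunks; the prefix value and the zero-based numbering are assumptions for the example, not documented behaviour.

```python
def chunk_rows(rows, limit):
    """Yield successive chunks of at most `limit` rows."""
    for start in range(0, len(rows), limit):
        yield rows[start:start + limit]

rows = [f"{i},value_{i}" for i in range(2500)]
for n, chunk in enumerate(chunk_rows(rows, 1000)):
    key = f"daily_export_{n}"     # follows the <prefix>_<file-number> pattern
    print(key, len(chunk))
# daily_export_0 1000
# daily_export_1 1000
# daily_export_2 500
```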


Header = drop-down

When Yes (default), include a header line at the top of each file with column names.

