
Cortex Translate

Cortex Translate is a transformation component that uses Snowflake Cortex to translate the input text of one or more columns from one supported language to another—for example, from English to German. When a column is included, all rows in that column will be translated.

For a list of supported languages, read the Usage notes.

You must use a Snowflake role that has been granted the SNOWFLAKE.CORTEX_USER database role. Read Required Privileges to learn more about granting this privilege.

To learn more about Snowflake Cortex, such as availability, usage quotas, managing costs, and more, visit Large Language Model (LLM) Functions (Snowflake Cortex).
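
Under the hood, Snowflake Cortex exposes translation as the SNOWFLAKE.CORTEX.TRANSLATE SQL function, which takes the input text, a source language code, and a target language code. As a rough sketch, the query a transformation like this might generate per configured column could look as follows (Python string assembly; the table name and the helper function are hypothetical, not part of the component):

```python
# Hypothetical sketch of the per-column SQL a Cortex Translate step might
# generate. SNOWFLAKE.CORTEX.TRANSLATE(text, source, target) is the real
# Snowflake function; the table/column names and this helper are made up.

def translate_query(table: str, column: str, source: str, target: str) -> str:
    """Build a SELECT that adds a translated copy of `column`."""
    new_col = f"{target}_{column}"          # e.g. de_SUPPORT CASE
    return (
        f'SELECT *, SNOWFLAKE.CORTEX.TRANSLATE("{column}", '
        f"'{source}', '{target}') AS \"{new_col}\" "
        f"FROM {table}"
    )

print(translate_query("support_cases", "SUPPORT CASE", "en", "de"))
```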


Properties

Name = string

A human-readable name for the component.


Columns = column editor

  • Input Column: Select a column from your input table. The text in each row of this column will be translated.
  • Source Language: Select the language of the input text.
  • Target Language: Select the language to translate the input text into.

To translate more than one input column, click +.


Include Input Columns = boolean

  • Yes: Includes both the source language input columns and the new target language translation columns. Input columns not selected in Columns are also included.
  • No: Only includes the new target language translation columns.
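
The effect of this property on the output schema can be sketched in Python (a simplified model, not the component's actual implementation; translated columns take the `<target language code>_<input column>` name used in the example below):

```python
# Sketch (an assumption, not the component's implementation) of how
# Include Input Columns shapes the output schema. Translated columns are
# named <target language code>_<input column>, e.g. de_SUPPORT CASE.

def output_columns(input_cols, translated, target_code, include_inputs):
    """Return output column names for a Cortex Translate step.

    input_cols      -- all columns on the input table
    translated      -- the subset selected under Columns
    target_code     -- e.g. 'de' for German
    include_inputs  -- the Include Input Columns property
    """
    new_cols = [f"{target_code}_{col}" for col in translated]
    if include_inputs:
        # Yes: keep every input column (selected or not) plus translations.
        return list(input_cols) + new_cols
    # No: only the new target language translation columns.
    return new_cols

cols = ["DATE", "SUPPORT CASE", "NAME"]
print(output_columns(cols, ["SUPPORT CASE"], "de", include_inputs=True))
# -> ['DATE', 'SUPPORT CASE', 'NAME', 'de_SUPPORT CASE']
```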

Example

A company has hired a support engineer, Klaus, whose first language is German. Klaus's English is passable, but he can respond to support queries faster if he can read the queries in German. Klaus's latest support cases look like this:

Input data
 _______________________________________________________________________
| DATE       | SUPPORT CASE                                 | NAME      |
|------------|----------------------------------------------|-----------|
| 2023-01-10 | Unable to access data sources.               | Alice     |
| 2023-02-05 | Need help with query optimization.           | Bob       |
| 2023-03-20 | Issue with database schema update.           | Charlie   |
| 2023-04-15 | How to automate data ingestion?              | Dana      |
| 2023-05-01 | Data validation failing on new records.      | Eve       |
| 2023-06-25 | Need guidance on ETL tool selection.         | Frank     |
| 2023-07-10 | Client reporting login issues.               | Grace     |
| 2023-08-05 | Data pipeline running slow.                  | Henry     |
| 2023-09-20 | How to handle large data volumes?            | Iris      |
| 2023-10-15 | Error connecting to external API.            | Jack      |
| 2023-11-30 | Need assistance with ETL scheduling.         | Kate      |
| 2023-12-25 | Best practices for data integration.         | Liam      |
| 2024-01-10 | How to handle real-time data streams?        | Mia       |
| 2024-02-05 | Data quality issues in reports.              | Noah      |
| 2024-03-20 | Performance degradation in ETL jobs.         | Olivia    |
| 2024-04-15 | How to implement data masking?               | Peter     |
|____________|______________________________________________|___________|

Klaus can use the Cortex Translate component to translate the SUPPORT CASE column from English to German, making it easier for him to understand and respond to support cases.


Cortex Translate component properties:

  • Columns
    • Input Column: SUPPORT CASE
    • Source Language: English
    • Target Language: German
  • Include Input Columns: Yes

By setting Include Input Columns to Yes, the original columns of the table are kept in the pipeline output, and the target language column (in this case, de_SUPPORT CASE) is appended to the end of the table.

Output data
 ________________________________________________________________________________________________________________________________
| DATE       | SUPPORT CASE                                 | NAME      |de_SUPPORT CASE                                         |
|------------|----------------------------------------------|-----------|--------------------------------------------------------|
| 2023-01-10 | Unable to access data sources.               | Alice     |Es ist nicht möglich, auf Datenquellen zuzugreifen.     |
| 2023-02-05 | Need help with query optimization.           | Bob       |Benötigen Sie Hilfe bei der Abfrageoptimierung.         |
| 2023-03-20 | Issue with database schema update.           | Charlie   |Problem mit der Datenbankschema-Update.                 |
| 2023-04-15 | How to automate data ingestion?              | Dana      |Wie automatisiert man die Datenerfassung?               |
| 2023-05-01 | Data validation failing on new records.      | Eve       |Bei neuen Datensätzen fehlende Datenvalidierung.        |
| 2023-06-25 | Need guidance on ETL tool selection.         | Frank     |Benötigen Sie Anleitung zur Auswahl des ETL-Tools.      |
| 2023-07-10 | Client reporting login issues.               | Grace     |Probleme mit der Kundenmeldung.                         |
| 2023-08-05 | Data pipeline running slow.                  | Henry     |Die Datenpipeline läuft langsam.                        |
| 2023-09-20 | How to handle large data volumes?            | Iris      |Wie man mit großen Datenmengen umgeht?                  |
| 2023-10-15 | Error connecting to external API.            | Jack      |Fehler bei der Verbindung mit der externen API.         |
| 2023-11-30 | Need assistance with ETL scheduling.         | Kate      |Benötigen Sie Hilfe bei der ETL-Planung.                |
| 2023-12-25 | Best practices for data integration.         | Liam      |Best Practices für die Datenintegration.                |
| 2024-01-10 | How to handle real-time data streams?        | Mia       |Wie verarbeitet man Echtzeit-Datenströme?               |
| 2024-02-05 | Data quality issues in reports.              | Noah      |Datenqualitätsprobleme in Berichten.                    |
| 2024-03-20 | Performance degradation in ETL jobs.         | Olivia    |Leistungsabbau bei ETL-Jobs.                            |
| 2024-04-15 | How to implement data masking?               | Peter     |Wie implementiert man die Datenmaskierung?              |
|____________|______________________________________________|___________|________________________________________________________|
