Amazon Transcribe Input
Editions
Production use of this feature is available for specific editions only. Contact our sales team for more information.
Amazon Transcribe Input is an orchestration component that extracts data from media and audio files and converts that data into text.
The component uses the Amazon Transcribe API to retrieve data and load it into a table. This stages the data, so the table is reloaded each time the pipeline runs. You can then use transformations to enrich and manage the data in permanent tables. For more information, read Amazon Transcribe.
To stage data to Azure Blob Storage, the Azure credentials associated with your environment must be assigned the Storage Blob Data Contributor role. For more information, read User assigned with the Storage Blob Data Contributor role.
Prerequisites
Before you use the Amazon Transcribe Input component, you'll need to add AWS cloud credentials to the Data Productivity Cloud.
Properties
Reference material is provided below for the Configure and Destination properties.
Configure
S3 Bucket Region
= drop-down
The AWS region where the S3 bucket you want to connect to is located.
S3 Object Prefix
= string
The S3 path to the bucket, folder, or file that will be processed. For example, s3://bucket-name/, s3://bucket-name/folder/, or s3://bucket-name/folder/specific-file.mp3.
Source File Filter Pattern
= string
A regex pattern used to filter files. This is useful when you select a bucket or folder in the S3 Object Prefix parameter and want to filter which files are processed. For example, a value of .*\.mp3$ will match all files with an .mp3 extension, or the pattern could be used to match specific parts of a file name.
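If you want to test a pattern before entering it, the snippet below shows the same kind of filtering with Python's re module. The object keys are hypothetical, and the component's exact matching semantics (full match versus partial match) aren't documented here, so anchor your pattern explicitly where it matters.

```python
import re

# Hypothetical object keys as they might appear under the configured S3 Object Prefix.
keys = [
    "meetings/2024-02-29-standup.mp3",
    "meetings/2024-02-29-standup.txt",
    "meetings/interview-panel.wav",
]

# The same kind of pattern you would enter in Source File Filter Pattern.
pattern = re.compile(r".*\.(mp3|wav)$")

# Keep only the keys that match, which is how the filter conceptually narrows
# down the files selected by the S3 Object Prefix.
matched = [k for k in keys if pattern.match(k)]
print(matched)  # ['meetings/2024-02-29-standup.mp3', 'meetings/interview-panel.wav']
```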
Max Speakers
= integer
The maximum number of speakers that will be detected in the audio files. For example, for a meeting recording with 5 people, enter 5. The default is 2.
Concurrent Jobs
= string
Set how many Amazon Transcribe jobs can run concurrently. The default is 10. The maximum is 100.
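For context, the Max Speakers and Concurrent Jobs properties correspond conceptually to the speaker-label settings and job parallelism of the underlying Amazon Transcribe API, which the component manages for you. The hedged sketch below shows what a single raw API call looks like with boto3; the job name, bucket path, and language code are placeholders.

```python
import boto3

# Use the same region as the S3 Bucket Region property.
transcribe = boto3.client("transcribe", region_name="eu-west-1")

# Roughly what one transcription job looks like at the API level. The component
# submits up to "Concurrent Jobs" of these at a time and collects the results.
transcribe.start_transcription_job(
    TranscriptionJobName="example-meeting-recording",  # placeholder job name
    Media={"MediaFileUri": "s3://bucket-name/folder/specific-file.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
    Settings={
        "ShowSpeakerLabels": True,
        "MaxSpeakerLabels": 5,  # the Max Speakers property
    },
)
```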
Destination
Select your cloud data warehouse.
Destination
= drop-down
Select the destination for your data. This is either in Snowflake as a table or as files in cloud storage.
- Snowflake: Load your data into a table in Snowflake. The data must first be staged via Snowflake or a cloud storage solution.
- Cloud Storage: Load your data directly into files in your preferred cloud storage location. The format of these files can differ between source systems, and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.
Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database to access. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Table Name
= string
The name of the table to be created in your Snowflake database. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Load Strategy
= drop-down
Define what happens if the table name already exists in the specified Snowflake database and schema. An illustrative sketch of these strategies follows the list.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted each time this pipeline runs.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data will be inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows. The second run appends the same 100 records, so the table then holds 200 records, the third run brings it to 300, and so on.
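To make these strategies concrete, the hedged sketch below shows roughly equivalent Snowflake SQL issued through the snowflake-connector-python package. This is not the component's actual implementation: in practice the data arrives from the configured stage (for example via COPY INTO), and the connection details, table name, and staged source shown here are placeholders.

```python
import snowflake.connector

# Placeholder connection; the component uses your environment's Snowflake connection instead.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="MY_WH", database="MY_DB", schema="MY_SCHEMA",
)
cur = conn.cursor()

load_strategy = "APPEND"  # one of REPLACE, TRUNCATE_AND_INSERT, FAIL_IF_EXISTS, APPEND

if load_strategy == "REPLACE":
    # Destroy and recreate the target table from the newly staged data.
    cur.execute("CREATE OR REPLACE TABLE transcribe_output AS SELECT * FROM staged_transcribe_data")
elif load_strategy == "TRUNCATE_AND_INSERT":
    # Keep the table definition, remove all rows, then insert the new rows.
    cur.execute("TRUNCATE TABLE transcribe_output")
    cur.execute("INSERT INTO transcribe_output SELECT * FROM staged_transcribe_data")
elif load_strategy == "FAIL_IF_EXISTS":
    # A plain CREATE TABLE errors if the table already exists, failing the run.
    cur.execute("CREATE TABLE transcribe_output AS SELECT * FROM staged_transcribe_data")
else:  # APPEND
    # Existing rows are kept and new rows accumulate, so 100 staged records
    # per run gives 100, then 200, then 300 rows across successive runs.
    cur.execute("INSERT INTO transcribe_output SELECT * FROM staged_transcribe_data")

cur.close()
conn.close()
```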
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Stage Access Strategy
= drop-down (optional)
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
- Credentials: Connects to the external stage (AWS, Azure) using your configured cloud provider credentials. Not available for Google Cloud Storage.
- Storage Integration: Use a Snowflake storage integration to grant access to Snowflake to read data from and write to a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
Stage Platform
= drop-down
Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Snowflake: Stage your data on a Snowflake internal stage.
- Azure Storage: Stage your data in an Azure Blob Storage container.
- Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.
Click one of the tabs below for documentation applicable to that staging platform.
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
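Storage integrations are created in Snowflake itself, outside this component. As a hedged illustration of that setup, the sketch below creates an S3-backed integration with snowflake-connector-python; the integration name, role ARN, and bucket are placeholders, and your account may need different parameters.

```python
import snowflake.connector

# Placeholder connection; run as a Snowflake role permitted to create integrations.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
cur = conn.cursor()

# Example S3-backed storage integration with placeholder ARN and bucket.
cur.execute("""
CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://bucket-name/staging/')
""")

# DESC INTEGRATION returns the IAM user and external ID that the AWS role must
# trust before Snowflake can read from or write to the bucket.
cur.execute("DESC INTEGRATION my_s3_integration")
for row in cur.fetchall():
    print(row)
```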
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Internal Stage Type
= drop-down
Select the Snowflake internal stage type. Use the Snowflake links provided to learn more about each type of stage.
- User: Each Snowflake user has a user stage allocated to them by default for file storage. You may find the user stage convenient if your files will only be accessed by a single user, but need to be copied into multiple tables.
- Named: A named stage provides high flexibility for data loading. Users with the appropriate privileges on the stage can load data into any table. Furthermore, because the stage is a database object, any security or access rules that apply to all objects will apply to the named stage.
Named stages can be altered and dropped. User stages cannot.
Named Stage
= drop-down
Select your named stage. Read Creating a named stage to learn how to create a new named stage.
Warning
There is a known issue where named stages that include special characters or spaces are not supported.
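Named stages are also created in Snowflake before you can select them here. A minimal sketch with a placeholder stage name might look like the following; per the warning above, the name avoids spaces and special characters.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    database="MY_DB", schema="MY_SCHEMA",
)
cur = conn.cursor()

# A simple internal named stage; "transcribe_staging" is a placeholder name.
cur.execute("CREATE STAGE IF NOT EXISTS transcribe_staging")

# Optional: confirm the stage exists and is visible to your role.
cur.execute("SHOW STAGES LIKE 'transcribe_staging'")
print(cur.fetchall())
```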
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Stage Access Strategy
= drop-down (optional)
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
- Storage Integration: Use a Snowflake storage integration to grant access to Snowflake to read data from and write to a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrites existing files that have a matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
File Prefix
= string (optional)
A string of characters that precedes the names of the written files. This can be useful for organizing your files.
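If it helps to see how the load strategy, folder path, and file prefix combine into object names, the hedged sketch below reconstructs the naming patterns from the table above. The exact timestamp format and part numbering are produced by the component; this function is only an illustration.

```python
import uuid
from datetime import datetime, timezone

def staged_file_name(load_strategy, folder_path=None, file_prefix=None, part=1):
    """Illustrative reconstruction of the file-naming patterns in the table above."""
    # e.g. 20240229100736969 (UTC, down to milliseconds)
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f")[:-3]
    if load_strategy == "append":
        if folder_path and file_prefix:
            return f"{folder_path}/{file_prefix}-{timestamp}-part{part}"
        # Without a folder path and prefix, a generated unique ID is used instead.
        return f"{uuid.uuid4()}/{timestamp}-part{part}"
    if load_strategy == "overwrite":
        if folder_path and file_prefix:
            # No timestamp, so files with the same structure are overwritten on each run.
            return f"{folder_path}/{file_prefix}-part{part}"
        raise ValueError("Folder Path and File Prefix are required for Overwrite Files in Folder")
    raise ValueError(f"Unknown load strategy: {load_strategy}")

print(staged_file_name("append", "folder", "prefix"))     # folder/prefix-20240229100736969-part1
print(staged_file_name("overwrite", "folder", "prefix"))  # folder/prefix-part1
```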
Storage
= drop-down
A cloud storage location where your data will be loaded as files. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Destination
= drop-down
Select the destination for your data. This is either in Databricks as a table or as files in cloud storage. The format of these files can differ between source systems, and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.
- Databricks: Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into files in your preferred cloud storage location.
Click either the Databricks or Cloud Storage tab on this page for documentation applicable to that destination type.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which schemas are available in the next parameter.
Schema
= drop-down
Select the Databricks schema. The special value, [Environment Default], will use the schema specified in the Data Productivity Cloud environment setup.
Table Name
= string
The name of the table to be created in your Databricks schema. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Load Strategy
= drop-down
Define what happens if the table name already exists in the specified Databricks schema.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted each time this pipeline runs.
- Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data will be inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows. The second run appends the same 100 records, so the table then holds 200 records, the third run brings it to 300, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Stage Platform
= drop-down
Use the drop-down menu to choose where the data is staged before being loaded into your Databricks table.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Azure Storage: Stage your data in an Azure Blob Storage container.
Click one of the tabs below for documentation applicable to that staging platform.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrites existing files that have a matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
File Prefix
= string (optional)
A string of characters that precedes the names of the written files. This can be useful for organizing your files.
Storage
= drop-down
A cloud storage location where your data will be loaded as files. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Destination
= drop-down
Select the destination for your data. This is either in Amazon Redshift as a table or as files in cloud storage. The format of these files can differ between source systems, and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.
- Redshift: Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into files in your preferred cloud storage location.
Click either the Amazon Redshift or Cloud Storage tab on this page for documentation applicable to that destination type.
Schema
= drop-down
Select the Amazon Redshift schema that will contain your table. The special value, [Environment Default], will use the schema defined in the environment. For information about using multiple schemas, read Schemas.
Table Name
= string
The name of the table to be created in your Amazon Redshift database. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Load Strategy
= drop-down
Define what happens if the table name already exists in the specified Amazon Redshift database and schema.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted each time this pipeline runs.
- Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data will be inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows. The second run appends the same 100 records, so the table then holds 200 records, the third run brings it to 300, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into before it is loaded into your Amazon Redshift table. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrites existing files that have a matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
File Prefix
= string (optional)
A string of characters to include at the beginning of the written files. This can be useful for organizing your files.
Storage
= drop-down
A cloud storage location where your data will be loaded as files. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Deactivate soft delete for Azure blobs (Databricks)
If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the "Enable soft delete for blobs" setting in your Azure account for your pipeline to run successfully. To do this, follow the steps below. A programmatic check of the setting is also sketched after the steps.
- Log in to the Azure portal.
- In the top-left, click ☰ → Storage Accounts.
- Select the intended storage account.
- In the menu, under Data management, click Data protection.
- Untick Enable soft delete for blobs. For more information, read Soft delete for blobs.
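If you prefer to check or change this setting programmatically rather than through the portal, a hedged sketch with the azure-storage-blob package might look like the following. The connection string is a placeholder, and your credential needs permission to modify blob service properties.

```python
from azure.storage.blob import BlobServiceClient, RetentionPolicy

# Placeholder connection string for the storage account selected in Storage Account.
service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")

# Inspect the current blob service properties, including the soft-delete policy.
props = service.get_service_properties()
print("Current delete retention policy:", props["delete_retention_policy"])

# Disable blob soft delete, equivalent to unticking "Enable soft delete for blobs".
service.set_service_properties(delete_retention_policy=RetentionPolicy(enabled=False))
```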
Snowflake | Databricks | Amazon Redshift |
---|---|---|
✅ | ✅ | ✅ |