Azure Document Intelligence
Editions
Production use of this feature is available for specific editions only. Contact our sales team for more information.
Azure Document Intelligence is an orchestration component that uses the Azure AI Document Intelligence API to automate the extraction of text, handwriting, layout elements, and other key data from forms and documents. The output format can be either Markdown or text.
For more information about Azure AI Document Intelligence, read What is Azure AI Document Intelligence?.
When this component runs, it uses the Azure Document Intelligence API to retrieve data and load it into a table in your cloud data platform, or into a cloud storage location. The table is used for staging the data, so it is reloaded on each run.
You can then use transformations to enrich and manage the data in permanent tables.
Prerequisites
Cloud credentials and authentication
Before you use the Azure Document Intelligence component, you'll need to add Azure cloud credentials to the Data Productivity Cloud. This requires registering an application.
You'll also need an application for your Document Intelligence service.
The cloud credentials application and the Document Intelligence application require the following Azure roles:
- Your Azure application for cloud credentials requires the Storage Blob Data Contributor role in Azure.
- Your Azure Document Intelligence application also requires the Storage Blob Data Contributor role in Azure.
- Your Azure application for cloud credentials also requires additional roles to associate it with the Document Intelligence service.
A system-created managed identity is required when creating a Document Intelligence application. For more information, read Managed identity assignments.
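If you want to verify the role assignments before running a pipeline, you can check that the registered application can reach the blobs you intend to process. The following is a minimal Python sketch, not part of the component itself; the tenant, client, storage account, container, and prefix values are placeholders to replace with your own.

```python
from azure.identity import ClientSecretCredential
from azure.storage.blob import ContainerClient

# Placeholder values - substitute your own tenant, app registration,
# storage account, container, and prefix.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<application-client-id>",
    client_secret="<client-secret>",
)

container = ContainerClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    container_name="<container>",
    credential=credential,
)

# Listing blobs under the prefix requires read access, which the
# Storage Blob Data Contributor role includes.
for blob in container.list_blobs(name_starts_with="invoices/"):
    print(blob.name)
```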
Properties
Reference material is provided below for the Configure and Destination properties.
Configure
Azure Blob Location
= string
The URL prefix of the Azure Blob storage location where your document is stored. All files found under the prefix will be matched.
Document Intelligence Service Endpoint
= string
The endpoint of your Document Intelligence service. This endpoint is generated in Azure when you create a Document Intelligence resource. To learn more, read Create a Document Intelligence resource.
Output Format
= drop-down
Choose your preferred output format. The options are Markdown and Text.
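The component drives the same document analysis operation that the Azure AI Document Intelligence service exposes. As an illustration only (not the component's implementation), the sketch below uses the azure-ai-formrecognizer Python package to analyze a single document at a blob URL with the prebuilt layout model; the endpoint, key, and document URL are placeholder assumptions. The component's Markdown output option corresponds to newer service versions, so this sketch simply prints the plain-text content.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder values - substitute your own endpoint, key, and document URL.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
key = "<document-intelligence-key>"
document_url = (
    "https://<storage-account>.blob.core.windows.net/<container>/invoices/sample.pdf"
)

client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Analyze the document with the prebuilt layout model, which extracts printed
# and handwritten text, tables, and other layout elements.
poller = client.begin_analyze_document_from_url("prebuilt-layout", document_url)
result = poller.result()

# result.content holds the extracted text; tables are available as
# structured collections on the result object.
print(result.content[:500])
for table in result.tables:
    print(f"Table with {table.row_count} rows and {table.column_count} columns")
```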
Destination
Select your cloud data warehouse.
Destination
= drop-down
- Snowflake: Load your data into Snowflake. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into your preferred cloud storage location.
Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Table Name
= string
The name of the table to be created.
Load Strategy
= drop-down
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted when the pipeline next runs.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Append: If the specified table name already exists, the data is appended to the end of the existing data without altering or deleting it. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows; the second run appends the same 100 records, leaving 200 rows in the table; the third run leaves 300 rows, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Stage Platform
= drop-down
Choose a data staging platform using the drop-down menu.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Snowflake: Stage your data on a Snowflake internal stage.
- Azure Storage: Stage your data in an Azure Blob Storage container.
- Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.
Click one of the tabs below for documentation applicable to that staging platform.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Internal Stage Type
= drop-down
A Snowflake internal stage type. Currently, only type User is supported.
Read Choosing an Internal Stage for Local Files to learn more about internal stage types and the usage of each.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
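Because storage integrations must be created in Snowflake before they appear in this drop-down, the sketch below shows one way to create an Azure storage integration using the snowflake-connector-python package. It is an illustration only; the integration name, account details, tenant ID, and storage location are placeholder assumptions.

```python
import snowflake.connector

# Placeholder connection details - substitute your own account, user, and role.
conn = snowflake.connector.connect(
    account="<account-identifier>",
    user="<user>",
    password="<password>",
    role="ACCOUNTADMIN",  # creating integrations typically requires elevated privileges
)

create_integration = """
CREATE STORAGE INTEGRATION IF NOT EXISTS my_azure_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'AZURE'
  ENABLED = TRUE
  AZURE_TENANT_ID = '<tenant-id>'
  STORAGE_ALLOWED_LOCATIONS = ('azure://<storage-account>.blob.core.windows.net/<container>/')
"""

cur = conn.cursor()
try:
    cur.execute(create_integration)
    # DESC STORAGE INTEGRATION returns the consent URL and the multi-tenant
    # application that must be granted access to the storage account.
    cur.execute("DESC STORAGE INTEGRATION my_azure_integration")
    for row in cur.fetchall():
        print(row)
finally:
    cur.close()
    conn.close()
```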
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrites existing files that have a matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
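To make the naming rules above concrete, the following short Python helper (an illustrative sketch, not the component's code) builds an object key according to the strategies described in the table.

```python
import uuid
from datetime import datetime, timezone
from typing import Optional


def build_object_key(part: int, folder: Optional[str] = None,
                     prefix: Optional[str] = None, overwrite: bool = False) -> str:
    """Illustrative sketch of the documented file-naming rules."""
    # Timestamp in the documented yyyymmddHHMMSSmmm form, e.g. 20240229100736969.
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f")[:-3]
    if folder and prefix:
        if overwrite:
            # Overwrite Files in Folder: folder/prefix-partX (matching files are replaced).
            return f"{folder}/{prefix}-part{part}"
        # Append Files in Folder with folder path and file prefix: folder/prefix-timestamp-partX.
        return f"{folder}/{prefix}-{timestamp}-part{part}"
    if overwrite:
        # Overwriting without a folder path and file prefix fails validation.
        raise ValueError("Folder Path and File Prefix are required to overwrite files in folder.")
    # Append without folder path and file prefix: uniqueID/timestamp-partX.
    return f"{uuid.uuid4()}/{timestamp}-part{part}"


print(build_object_key(1, folder="folder", prefix="prefix"))
# e.g. folder/prefix-20240229100736969-part1
```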
Folder Path
= string (optional)
The folder path of the written files.
File Prefix
= string (optional)
A string of characters to include at the beginning of the written file names. Often used for organizing files in the storage location.
Storage
= drop-down
The cloud storage location to load your data into. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to the blob container that the data will be loaded into. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to load the data into. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Destination
= drop-down
- Databricks: Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into your preferred cloud storage location.
Click either the Databricks or Cloud Storage tab on this page for documentation applicable to that destination type.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which schemas are available in the next parameter.
Schema
= drop-down
Select the Databricks schema. The special value, [Environment Default], will use the schema specified in the Data Productivity Cloud environment setup.
Table Name
= string
The name of the table to be created.
Load Strategy
= drop-down
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted when the pipeline next runs.
- Append: If the specified table name already exists, the data is appended to the end of the existing data without altering or deleting it. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows; the second run appends the same 100 records, leaving 200 rows in the table; the third run leaves 300 rows, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Stage Platform
= drop-down
Choose a data staging platform using the drop-down menu.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Azure Storage: Stage your data in an Azure Blob Storage container.
Click one of the tabs below for documentation applicable to that staging platform.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrites existing files that have a matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path of the written files.
File Prefix
= string (optional)
A string of characters to include at the beginning of the written file names. Often used for organizing files in the storage location.
Storage
= drop-down
The cloud storage location to load your data into. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to the blob container that the data will be loaded into. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to load the data into. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Destination
= drop-down
- Redshift: Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into your preferred cloud storage location.
Click either the Amazon Redshift or Cloud Storage tab on this page for documentation applicable to that destination type.
Schema
= drop-down
Select the Redshift schema. The special value, [Environment Default], will use the schema defined in the environment. For information about using multiple schemas, read Schemas.
Table Name
= string
The name of the table to be created.
Load Strategy
= drop-down
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted when the pipeline next runs.
- Append: If the specified table name already exists, the data is appended to the end of the existing data without altering or deleting it. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows; the second run appends the same 100 records, leaving 200 rows in the table; the third run leaves 300 rows, and so on.
Clean Staged Files
= boolean
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Load Strategy
= drop-down (optional)
- Append Files in Folder: Appends files to the storage folder. This is the default setting.
- Overwrite Files in Folder: Overwrites existing files that have a matching structure.
See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:
Configuration | Description |
---|---|
Append files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1. |
Append files in folder without defined folder path and file prefix. | Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1. |
Overwrite files in folder with defined folder path and file prefix. | Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten. |
Overwrite files in folder without defined folder path and file prefix. | Validation will fail. Folder path and file prefix must be supplied for this load strategy. |
Folder Path
= string (optional)
The folder path of the written files.
File Prefix
= string (optional)
A string of characters to include at the beginning of the written file names. Often used for organizing files in the storage location.
Storage
= drop-down
The cloud storage location to load your data into. Choose Amazon S3, Azure Storage, or Google Cloud Storage.
Click the tab that corresponds to your chosen cloud storage service.
Amazon S3 Bucket
= drop-down
An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Storage Account
= drop-down
Select a storage account linked to the blob container that the data will be loaded into. For more information, read Storage account overview.
Container
= drop-down
Select a Blob container to load the data into. For more information, read Introduction to Azure Blob storage.
GCS Bucket
= drop-down
The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.
Overwrite
= boolean
Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
Deactivate soft delete for Azure blobs (Databricks)
If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the "Enable soft delete for blobs" setting in your Azure account for your pipeline to run successfully. To do this:
- Log in to the Azure portal.
- In the top-left, click ☰ → Storage Accounts.
- Select the intended storage account.
- In the menu, under Data management, click Data protection.
- Untick Enable soft delete for blobs. For more information, read Soft delete for blobs.
Snowflake | Databricks | Amazon Redshift |
---|---|---|
✅ | ✅ | ✅ |