Kafka

The Kafka component is an orchestration component that loads data from Kafka topics in either schemaless JSON or JSON Schema format. New to Kafka? Read Apache Kafka's Introduction.

This component has multiple destination options. You can choose to load your data into your cloud data warehouse (Snowflake, Databricks, Amazon Redshift) to then enrich your data in a transformation pipeline, or simply load the data into your chosen cloud storage destination (Amazon S3, Azure Blob Storage, Google Cloud Storage). If you choose a cloud data warehouse as your destination, you can also choose where to stage your data—your cloud data warehouse or one of the cloud storage destinations.

We recommend referring to the Apache Kafka documentation when using this component.

If the component requires access to a cloud provider, it will use the cloud credentials associated with your environment to access resources.


Properties

Reference material is provided below for the Connect, Configure, Destination, and Advanced Settings properties.

Connect

Authentication Type = drop-down

Specify the authentication mechanism to use for connecting to the Kafka broker.

  • None: No authentication.
  • Username & Password: Authenticate with a username and password (basic authentication). This uses the Simple Authentication and Security Layer (SASL) protocol. In your Kafka configuration file (server.properties), you must set the security.inter.broker.protocol to support SASL. Read SASL configuration to learn more.
  • OAuth 2.0 Client Credentials: Authenticate using OAuth 2.0 authorization. You will need an OAuth 2.0 authorization server (such as Auth0, Okta, or Keycloak) to issue access tokens.

If using OAuth 2.0 Client Credentials, you must edit your server.properties file to ensure the server (broker) accepts SASL/OAUTHBEARER tokens. For example:

listeners=SASL_SSL://:9092
sasl.enabled.mechanisms=OAUTHBEARER
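
As a hedged illustration only, a fuller server.properties sketch for accepting SASL/OAUTHBEARER tokens might look like the following; the JWKS endpoint URL and expected audience are placeholders for values from your own authorization server, and the exact properties depend on your Kafka version:

# Illustrative broker-side OAUTHBEARER settings (placeholder values)
listeners=SASL_SSL://:9092
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=OAUTHBEARER
# JWKS endpoint published by your OAuth 2.0 authorization server
sasl.oauthbearer.jwks.endpoint.url=https://your-auth-server/.well-known/jwks.json
# Optional token claim validation
sasl.oauthbearer.expected.audience=kafka-broker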

Username = string

The username value defined in your jaas.conf (Java Authentication and Authorization Service) file. You can define as many users as required in this file. For example:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="myUser"
    password="myPassword";
};

To set up additional Kafka users, edit your jaas.conf file accordingly. For example, this is a configuration that defines additional users (producer, consumer, registry):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    serviceName="kafka"
    username="admin"
    password="admin-secret"
    user_producer="producer-secret"
    user_consumer="consumer-secret"
    user_registry="registry-secret";
};
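
As a hedged illustration, a client authenticating as one of these users (for example, producer) would supply matching credentials in its own configuration, either through a client-side JAAS file or inline via sasl.jaas.config; the values below are placeholders and must match your jaas.conf:

# Illustrative client-side SASL/PLAIN settings (use SASL_SSL with TLS in production)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="producer" \
  password="producer-secret";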

Password = drop-down

Use the drop-down menu to select the corresponding secret definition that denotes the value of the password tied to your username in your jaas.conf (Java Authentication and Authorization Service) file. For example:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="myUser"
    password="myPassword";
};

Read Secret definitions to learn how to create a new secret definition.


Authentication = drop-down

Use the drop-down menu to select an OAuth connection to Kafka or Kafka Confluent Cloud.

Read Kafka authentication guide to learn how to obtain the credentials to create a Kafka or Kafka Confluent Cloud OAuth connection.


Bootstrap Servers = string list

Set the address of each bootstrap server (also known as a broker) to connect to when initiating communication with a Kafka cluster. Multiple bootstrap servers are defined as a comma-separated list, for example myBroker1:9092, myBroker2:9092, myBroker3:9092. If a failure occurs, Kafka will attempt to connect to the next broker in the list.
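
For example, the equivalent setting in a client properties file would be (broker addresses are placeholders):

bootstrap.servers=myBroker1:9092,myBroker2:9092,myBroker3:9092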


Encryption = drop-down

  • None: No encryption.
  • TLS: Use a TLS certificate (in .pem format) to secure communication with the Kafka cluster. To use TLS, make sure you update your Kafka configuration (server.properties) on the server side, as well as your client properties, to enable SSL as the communication protocol, for example security.protocol=SASL_SSL. This setting is recommended if you're using basic authentication (username and password). The available security protocols are listed below, followed by a configuration sketch.
  • PLAINTEXT: Unauthenticated, non-encrypted channel.
  • SSL: SSL channel.
  • SASL_PLAINTEXT: SASL-authenticated, non-encrypted channel.
  • SASL_SSL: SASL-authenticated, SSL channel.
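
As a hedged sketch (paths, passwords, and the port are placeholders), enabling SASL over TLS typically involves a keystore on the broker side and a matching trust configuration on the client side:

# Illustrative broker-side settings in server.properties
listeners=SASL_SSL://:9092
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=keystore-password
ssl.key.password=key-password

# Illustrative client-side settings
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=truststore-password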

Note

You can enable TLS without passing a certificate. In such cases, the security protocol will be set to the SSL equivalent of the authentication type chosen. If you do pass a certificate, the trust store certificate and trust store type will be configured.


Trusted Certificate = text editor optional

Add your TLS certificates into the text editor. Supports X.509 certificates in .pem format.

This certificate should be the trusted certificate of the broker.

Note

This parameter is optional. It's possible to set Encryption to TLS and not provide a certificate. This works in instances where the broker's certificate is trusted by a well-known certificate authority (CA).


Topic = drop-down

Once you have authenticated with your Kafka configuration, this drop-down will display any available Kafka topics in your defined cluster. Topics are high-level categories for data in Kafka, used as the storage mechanism of Kafka messages (events), similar to a folder storing files in a filesystem. Read Introduction to learn more.
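
If you want to verify which topics exist in your cluster outside of this component, a hedged example using Kafka's command-line tools is shown below; the bootstrap server address is a placeholder:

bin/kafka-topics.sh --bootstrap-server myBroker1:9092 --list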

Note

Internal topics have been filtered out and won't appear in the drop-down list.


Consumer Group = string

ID of a consumer group, usually defined by the app or client consuming data from Kafka. A consumer group is set as a group.id. For example, in a properties file such as consumer.properties:

bootstrap.servers=myBroker1:9092
group.id=my-consumer-group
enable.auto.commit=true
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer

The consumer group may also be defined in code, as in Java:

Properties props = new Properties();
props.put("bootstrap.servers", "myBroker1:9092");
props.put("group.id", "my-consumer-group");

Generate Unique Consumer Group = boolean

When set to Yes, a unique ID is appended to the value specified in Consumer Group for each run of this connector.

The following use cases highlight different ways you might use the Kafka connector, with advice on how to set the Generate Unique Consumer Group property in each case.

  • Single connector, full load: On each execution of the connector, a new consumer group is initiated and the read operation executes from the beginning of the topic. Set Generate Unique Consumer Group to Yes.
  • Multiple connectors, full load: For users who wish to run multiple Kafka connectors in parallel. You will need a unique consumer group per pipeline execution, which you can achieve using variables created at the pipeline level and passed into each Kafka connector's Consumer Group property. All Kafka connectors in the pipeline then execute as part of the same consumer group, meaning the full load is executed in parallel. Set Generate Unique Consumer Group to No.
  • Kafka-native incremental load: For users who wish to use Kafka's internal incremental load capabilities. Use a static consumer group name so that, on each execution, the connector remains part of the same consumer group. This exposes Kafka's internal offset management: the broker tracks the offset, and when the connector runs again, it begins reading from where it finished on the last execution. Set Generate Unique Consumer Group to No.

Warning

If you opt for the parallel load or incremental load use cases mentioned above, you should ensure you have a pipeline that executes on failure, in which a user can set Generate Unique Consumer Group to Yes, ensuring that a full load of the topic is executed successfully. We have created pipelines for this purpose, which are available to download.

The purpose of these pipelines is to circumvent issues that could occur if the pipeline fails after the read operation. In such instances, data could be lost because the consumer group would still read from its last read position, not where the pipeline failed.


Configure

Data Format = drop-down

Set the data format of messages in your topics.

  • Schemaless JSON: Messages have no predefined structure, meaning messages may vary. While schemaless JSON can offer simplicity and flexibility, discrepancies and missing fields require more manual intervention.
  • JSON Schema: Provides validation, compatibility control, and consistency of messages. You may find this format more suitable for production-grade applications where data integrity is vital.

Schema Registry URL = string

The URL to the schema registry used by the Kafka cluster. This may be specified in your client configuration file, or directly in an application's code. If you're using Confluent Cloud, use the Confluent Cloud UI. Read Quick Start for Schema Management on Confluent Cloud to learn more.

Only available when Data Format is set to a structured type. Currently, the only supported structured data format is JSON Schema.


Authentication Type = drop-down

Choose to authenticate with a username and password or not to authenticate (None).

Only available when Data Format is set to a structured type, such as JSON Schema.


Schema Registry Username = string

The username specified in your client app that connects to the schema registry. For example:

schema.registry.url=https://your-schema-registry-url
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info=my-username:my-password

Only available when Data Format is set to JSON Schema and Authentication Type is set to Username & Password.


Schema Registry Password = drop-down

Use the drop-down menu to select the corresponding secret definition that denotes the value of the password tied to your username and schema registry. For example:

schema.registry.url=https://your-schema-registry-url
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info=my-username:my-password

Read Secret definitions to learn how to create a new secret definition.

Only available when Data Format is set to JSON Schema and Authentication Type is set to Username & Password.


Schema Registry Encryption = drop-down

Determines whether the Schema Registry Trusted Certificate parameter is displayed or hidden.

  • None: No encryption.
  • TLS: Use a TLS certificate (in .pem format). Encryption is based on the protocol specified in Schema Registry URL. If the URL uses HTTPS (i.e. begins with https://...) then TLS will be enabled. Otherwise, traffic will be in plaintext.

Only available when Data Format is set to JSON Schema.


Schema Registry Trusted Certificate = editor optional

Add your TLS certificates into the text editor. Supports X.509 certificates in .pem format.

Only available when Data Format is set to JSON Schema and Schema Registry Encryption is set to TLS.


Destination

Select your cloud data warehouse.

Destination = drop-down

  • Snowflake: Load your data into Snowflake. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.

Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
  • Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows. The second run appends the same 100 records, so the table then holds 200 records; the third run brings the total to 300, and so on.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • Amazon S3: Stage your data on an AWS S3 bucket.
  • Snowflake: Stage your data on a Snowflake internal stage.
  • Azure Storage: Stage your data in an Azure Blob Storage container.
  • Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.

Click one of the tabs below for documentation applicable to that staging platform.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Internal Stage Type = drop-down

A Snowflake internal stage type. Currently, only type User is supported.

Read Choosing an Internal Stage for Local Files to learn more about internal stage types and the usage of each.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Storage Integration = drop-down

Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.


GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.

Load Strategy = drop-down (optional)

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

The following describes how this parameter works in combination with the Folder Path and File Prefix parameters:

  • Append Files in Folder, without a defined folder path and file prefix: Files are stored under the structure uniqueID/timestamp-partX, where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
  • Append Files in Folder, with a defined folder path and file prefix: Files are stored under the structure folder/prefix-timestamp-partX, where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
  • Overwrite Files in Folder, with a defined folder path and file prefix: Files are stored under the structure folder/prefix-partX, where X is the part number, starting from 1. All files with matching structures will be overwritten.
  • Overwrite Files in Folder, without a defined folder path and file prefix: Validation will fail. A folder path and file prefix must be supplied for this load strategy.

Folder Path = string (optional)

The folder path of the written files.


File Prefix = string (optional)

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.

Destination = drop-down

  • Databricks: Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Databricks or Cloud Storage tab on this page for documentation applicable to that destination type.

Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which schemas are available in the next parameter.


Schema = drop-down

Select the Databricks schema. The special value, [Environment Default], will use the schema specified in the Data Productivity Cloud environment setup.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows. The second run appends the same 100 records, so the table then holds 200 records; the third run brings the total to 300, and so on.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • Amazon S3: Stage your data on an AWS S3 bucket.
  • Azure Storage: Stage your data in an Azure Blob Storage container.

Click one of the tabs below for documentation applicable to that staging platform.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Storage Integration = drop-down

Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.


GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.

Load Strategy = drop-down (optional)

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

The following describes how this parameter works in combination with the Folder Path and File Prefix parameters:

  • Append Files in Folder, without a defined folder path and file prefix: Files are stored under the structure uniqueID/timestamp-partX, where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
  • Append Files in Folder, with a defined folder path and file prefix: Files are stored under the structure folder/prefix-timestamp-partX, where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
  • Overwrite Files in Folder, with a defined folder path and file prefix: Files are stored under the structure folder/prefix-partX, where X is the part number, starting from 1. All files with matching structures will be overwritten.
  • Overwrite Files in Folder, without a defined folder path and file prefix: Validation will fail. A folder path and file prefix must be supplied for this load strategy.

Folder Path = string (optional)

The folder path of the written files.


File Prefix = string (optional)

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.

Destination = drop-down

  • Redshift: Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Amazon Redshift or Cloud Storage tab on this page for documentation applicable to that destination type.

Schema = drop-down

Select the Redshift schema. The special value, [Environment Default], will use the schema defined in the environment. For information about using multiple schemas, read Schemas.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Append: If the specified table name already exists, the data is inserted without altering or deleting the existing data in the table; it's appended onto the end of the existing data. If the specified table name doesn't exist, the table will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows. The second run appends the same 100 records, so the table then holds 200 records; the third run brings the total to 300, and so on.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Load Strategy = drop-down (optional)

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

The following describes how this parameter works in combination with the Folder Path and File Prefix parameters:

  • Append Files in Folder, without a defined folder path and file prefix: Files are stored under the structure uniqueID/timestamp-partX, where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
  • Append Files in Folder, with a defined folder path and file prefix: Files are stored under the structure folder/prefix-timestamp-partX, where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
  • Overwrite Files in Folder, with a defined folder path and file prefix: Files are stored under the structure folder/prefix-partX, where X is the part number, starting from 1. All files with matching structures will be overwritten.
  • Overwrite Files in Folder, without a defined folder path and file prefix: Validation will fail. A folder path and file prefix must be supplied for this load strategy.

Folder Path = string (optional)

The folder path of the written files.


File Prefix = string (optional)

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.


Advanced Settings

Poll Timeout = integer

The maximum amount of time in milliseconds that a consumer will await new messages when calling the poll() method if none are currently available. If messages are available, they are retrieved immediately. By setting a poll timeout, you can prevent consumers from continuously looping and consuming CPU resources.

The default is 5000.
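
For context, this value maps to the timeout used when the underlying consumer polls for messages. As a hedged one-line illustration in Java, reusing the consumer from the earlier sketch:

// A Poll Timeout of 5000 means waiting up to 5 seconds for new messages
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(5000));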


Consumer Property Overrides = column editor

Modify or customize specific settings in your Kafka consumer configurations at runtime. You may wish to make use of this setting if you have default consumer setups but want to adjust certain properties for particular consumers without altering any global or shared configurations.

  • Property: The property whose value you wish to override. For example max.poll.records.
  • Value: The override value of the corresponding property. For example 50. This would limit how many records are returned on each poll to 50. You might wish to do this for performance optimization.
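
As a hedged illustration, overriding Property max.poll.records with Value 50 is equivalent to the following entry in a consumer configuration file:

# Limit each poll to at most 50 records
max.poll.records=50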

The following consumer properties are prohibited from being overridden, for security and/or performance reasons:

  • interceptor.classes
  • key.deserializer
  • metric.reporters
  • sasl.client.callback.handler.class
  • sasl.jaas.config
  • sasl.kerberos.kinit.cmd
  • sasl.kerberos.min.time.before.relogin
  • sasl.kerberos.service.name
  • sasl.kerberos.ticket.renew.jitter
  • sasl.kerberos.ticket.renew.window.factor
  • sasl.login.callback.handler.class
  • sasl.login.class
  • sasl.login.connect.timeout.ms
  • sasl.login.read.timeout.ms
  • sasl.login.refresh.buffer.seconds
  • sasl.login.refresh.min.period.seconds
  • sasl.login.refresh.window.factor
  • sasl.login.refresh.window.jitter
  • sasl.login.retry.backoff.max.ms
  • sasl.login.retry.backoff.ms
  • sasl.mechanism
  • sasl.oauthbearer.clock.skew.seconds
  • sasl.oauthbearer.expected.audience
  • sasl.oauthbearer.expected.issuer
  • sasl.oauthbearer.jwks.endpoint.refresh.ms
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms
  • sasl.oauthbearer.jwks.endpoint.url
  • sasl.oauthbearer.scope.claim.name
  • sasl.oauthbearer.sub.claim.name
  • sasl.oauthbearer.token.endpoint.url
  • security.protocol
  • security.providers
  • ssl.cipher.suites
  • ssl.enabled.protocols
  • ssl.endpoint.identification.algorithm
  • ssl.engine.factory.class
  • ssl.key.password
  • ssl.keymanager.algorithm
  • ssl.keystore.certificate.chain
  • ssl.keystore.key
  • ssl.keystore.location
  • ssl.keystore.password
  • ssl.keystore.type
  • ssl.protocol
  • ssl.provider
  • ssl.secure.random.implementation
  • ssl.trustmanager.algorithm
  • ssl.truststore.certificates
  • ssl.truststore.location
  • ssl.truststore.password
  • ssl.truststore.type
  • value.deserializer

Deactivate soft delete for Azure blobs (Databricks)

If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the "Enable soft delete for blobs" setting in your Azure account for your pipeline to run successfully. To do this:

  1. Log in to the Azure portal.
  2. In the top-left, click Storage Accounts.
  3. Select the intended storage account.
  4. In the menu, under Data management, click Data protection.
  5. Untick Enable soft delete for blobs. For more information, read Soft delete for blobs.
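
Alternatively, as a hedged sketch (the storage account and resource group names are placeholders), the same setting can be disabled with the Azure CLI:

az storage account blob-service-properties update \
  --account-name mystorageaccount \
  --resource-group my-resource-group \
  --enable-delete-retention false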
