Run Notebook

Use the Run Notebook component to execute a Databricks notebook from an orchestration pipeline. You might find this component useful if you wish to combine ETL/ELT tasks with the subsequent analysis performed in your Databricks notebooks.

Note

This component supports the use of pipeline and project variables. For more information, read Variables.


Properties

Name = string

A human-readable name for the component.


Cluster Name = drop-down

Select the Databricks cluster. The special value, [Environment Default], will use the cluster defined in the active environment.


Notebook Path = string

The path to a Databricks notebook. Search via a filepath string or browse the list of directories and notebooks based on the connected Workspace in the environment. Read Copy notebook path (AWS) or Copy notebook path (Azure) to learn how to retrieve your notebook file path.


Execution Mode = drop-down

  • Asynchronous: Runs the task by sending a request to Databricks. The task's status is ignored, and the pipeline continues immediately.
  • Synchronous: Runs the task and polls Databricks for status updates. The pipeline is paused until Databricks returns a status of TERMINATED, with a result of SUCCESS, FAILED, TIMEOUT, or CANCELLED.

The default is Asynchronous.
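The component performs this polling internally, but the synchronous behaviour can be sketched as a simple wait loop. This is a minimal illustration, not the component's actual implementation; `fetch_status` is a hypothetical callable standing in for a Databricks Jobs API status request:

```python
import time

def wait_for_run(fetch_status, poll_interval=1.0, timeout=60.0):
    """Poll a run until its life-cycle state is TERMINATED, then
    return the result state (SUCCESS, FAILED, TIMEOUT, or CANCELLED)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status["life_cycle_state"] == "TERMINATED":
            return status["result_state"]
        time.sleep(poll_interval)
    raise TimeoutError("run did not terminate before the polling deadline")

# Simulated status source: two in-flight polls, then a terminal SUCCESS.
states = iter([
    {"life_cycle_state": "PENDING", "result_state": None},
    {"life_cycle_state": "RUNNING", "result_state": None},
    {"life_cycle_state": "TERMINATED", "result_state": "SUCCESS"},
])
result = wait_for_run(lambda: next(states), poll_interval=0.01)
print(result)  # SUCCESS
```

In asynchronous mode, by contrast, the pipeline moves on as soon as the run request is accepted, without ever entering such a loop.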


Available for: Snowflake, Databricks, Amazon Redshift (preview)