Jira
This page describes how to configure a Jira data source. With Data Loader, you can replicate and load your source data into your target destination.
Schema Drift Support: Yes. Read Schema Drift to learn more.
Return to any page of this wizard by clicking Previous.
Click X in the upper-right of the UI and then click Yes, discard to close the pipeline creation wizard.
Prerequisites
- Read the Allowed IP addresses topic before you begin. You may not be able to connect to certain data sources without first allowing the Batch IP addresses. In these circumstances, connection tests will always fail and you will not be able to complete the pipeline.
- You must be an administrator of the Jira Cloud instance.
- You must be either a technical contact or billing contact of the Jira Cloud instance. To verify, log in to My Atlassian and confirm that the chosen Jira Cloud instance appears in the Licences page.
Create pipeline
- In Data Loader, click Add pipeline.
- Choose Jira from the grid of data sources.
- Choose Batch Loading.
Connect to Jira
Configure the Jira connection settings, specifying the following:
Property | Description |
---|---|
Connection URL | Your company connection URL to Jira. For example, https://company-name.atlassian.net/ |
Username | Your Jira username. For example, the email you use to log in to Jira. |
API Key | A managed entry representing your Jira API Key. Choose an existing entry from the dropdown menu or click Manage and then click Add new credential to configure a new managed API Key entry. Give the entry a label, which is what you can see in the dropdown menu for this parameter, and then input the value of the API Key. Read Manage Passwords to learn more. |
Advanced settings | Additional JDBC parameters or connection settings. Click Advanced settings and then choose a parameter from the dropdown menu and enter a value for the parameter. Click Add parameter for each extra parameter you want to add. For a list of compatible connection properties, read Allowed connection properties. |
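Jira Cloud authenticates API requests with HTTP Basic auth built from your username (email) and API token, which is what the Username and API Key fields above supply. As an illustrative sketch only (the email and key below are made-up placeholders, not real credentials), the header is constructed like this:

```python
import base64

def basic_auth_header(username: str, api_key: str) -> str:
    # Jira Cloud expects Basic auth: base64 of "email:api_token".
    token = base64.b64encode(f"{username}:{api_key}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Hypothetical credentials for illustration only.
header = basic_auth_header("jane@example.com", "my-api-key")
print(header.startswith("Basic "))
# → True
```

This is only a way to sanity-check that the username/API-key pair you enter is the same pair that works against the Jira REST API directly; Data Loader performs the equivalent handshake for you when you click Test and Continue.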
Click Test and Continue to test your settings and move forward. You can't continue if the test fails for any reason.
If zeroed dates cause the pipeline to fail with an error such as:

Value '0000-00-00' can't be represented as java.sql.Date

or:

Value '0000-00-00 00:00:00' can't be represented as java.sql.Timestamp

you can resolve the error by selecting the zeroDateTimeBehavior parameter in Advanced settings and assigning it a value of convertToNull.
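As a sketch of how such a property ends up on a connection string, the helper below appends a key/value pair to a JDBC-style URL. The URL shown is an illustrative assumption, not the exact string Data Loader builds internally; the parameter name and value follow the MySQL Connector/J convention:

```python
def add_jdbc_param(url: str, key: str, value: str) -> str:
    # Use '?' for the first parameter and '&' for any that follow.
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}{key}={value}"

# Hypothetical connection URL for illustration only.
print(add_jdbc_param("jdbc:mysql://host:3306/db", "zeroDateTimeBehavior", "convertToNull"))
# → jdbc:mysql://host:3306/db?zeroDateTimeBehavior=convertToNull
```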
Choose tables
Choose any tables you wish to include in the pipeline. Use the arrow buttons to move tables to the Tables to extract and load listbox, reorder tables with click-and-drag, and select multiple tables with the SHIFT key.
Click Continue with X tables to move forward.
Review your data set
Choose the columns from each table to include in the pipeline. By default, Data Loader selects all columns from a table.
Click Configure on a table to open Configure table. This dialog lists columns in a table and the data type of each column. Additionally, you can set a primary key and assign an incremental column state to a column.
Use the arrow buttons to move columns out of the Columns to extract and load listbox. Order columns with click-and-drag and select multiple columns with the SHIFT key. Click Done to continue.
Click Continue once you have configured each table.
Choose destination
- Choose an existing destination or click Add a new destination.
- Select a destination from Snowflake, Amazon Redshift, or Google BigQuery.
Set frequency
Property | Description |
---|---|
Pipeline name | A descriptive label for your pipeline. This is how the pipeline appears on the pipeline dashboard and how Data Loader refers to the pipeline. |
Sync every | The frequency at which the pipeline should sync. Day values range from 1 to 7, hour values from 1 to 23, and minute values from 5 to 59. This value also sets the delay before the first sync. |
Currently, you can't specify a start time.
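Because the sync interval doubles as the delay before the first run, the first sync time is simply the creation time plus the interval. A minimal sketch of that arithmetic, assuming the ranges in the table above (the function name and shape are illustrative, not part of Data Loader):

```python
from datetime import datetime, timedelta

# Allowed ranges per the Sync every row above.
LIMITS = {"day": range(1, 8), "hour": range(1, 24), "minute": range(5, 60)}

def first_sync(created_at: datetime, value: int, unit: str) -> datetime:
    """The interval also serves as the delay before the first sync."""
    if value not in LIMITS[unit]:
        raise ValueError(f"{value} is outside the allowed {unit} range")
    return created_at + timedelta(**{unit + "s": value})

# A pipeline created at noon with a 6-hour interval first syncs at 18:00.
print(first_sync(datetime(2024, 1, 1, 12, 0), 6, "hour"))
# → 2024-01-01 18:00:00
```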
Once you are happy with your pipeline configuration, click Create pipeline to complete the process and add the pipeline to your dashboard.