Recurly Extract
Overview
The Recurly Extract component calls the Recurly API to retrieve and store data to be either referenced by an external table or loaded into a table, depending on your cloud data warehouse. You can then use transformation components to enrich and manage the data in permanent tables.
Using this component may return structured data that requires flattening. For help with flattening such data, we recommend using the following components:
- Extract Nested Data for Snowflake or Google BigQuery.
- Nested Data Load for Amazon Redshift.
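The flattening itself happens inside those components, but as a rough illustration of what "flattening" means for nested API data, here is a minimal Python sketch using pandas on a made-up account record (the field names and values are hypothetical):

```python
# A made-up, simplified Recurly-style account record; real extracts
# are flattened by the components listed above, not by hand.
import pandas as pd

record = {
    "account_code": "acme-01",   # hypothetical sample values
    "state": "active",
    "address": {"city": "Denver", "country": "US"},
    "shipping_addresses": [
        {"nickname": "HQ", "city": "Denver"},
        {"nickname": "Depot", "city": "Boulder"},
    ],
}

# Nested objects become dotted columns such as "address.city";
# the repeated "shipping_addresses" list stays as a single column.
flat = pd.json_normalize(record, sep=".")

# Repeated child records become one row each, keeping the parent key.
ship = pd.json_normalize(record, record_path="shipping_addresses",
                         meta=["account_code"])
print(flat.columns.tolist())
print(ship)
```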
Properties
Snowflake
Name
= string
A human-readable name for the component.
Data Source
= drop-down
Select the Recurly data source from the available options.
API Key
= string
A valid Recurly API key. To acquire an API key, read the Recurly Extract authentication guide. Store the API key in the component, or create a managed entry for the API key using Manage Passwords (recommended).
Subdomain
= string
The subdomain of your Recurly account, for example: https://{subdomain}.recurly.com/v2/accounts.
Begin Time
= datetime
Operates on the attributes specified by the Sort property. Filters data records to only include those with datetimes greater than or equal to the supplied datetime. Accepts an ISO 8601 date or date and time.
End Time
= datetime
Operates on the attributes specified by the Sort property. Filters data records to only include those with datetimes less than or equal to the supplied datetime. Accepts an ISO 8601 date or date and time.
Page Limit
= integer
The maximum number of result pages to stage.
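Together, these connection properties describe a paged, time-bounded pull from the Recurly API. The sketch below shows roughly what such a request looks like outside Matillion ETL, assuming the Recurly v2 conventions of HTTP Basic authentication (the API key as the username, blank password) and ISO 8601 begin_time/end_time query parameters; the subdomain, key, and datetimes are placeholders:

```python
# A minimal sketch of the kind of v2 request the component issues for you.
# All values below are placeholders, not working credentials.
import requests

SUBDOMAIN = "my-company"       # your Recurly subdomain
API_KEY = "recurly-api-key"    # better stored via Manage Passwords

resp = requests.get(
    f"https://{SUBDOMAIN}.recurly.com/v2/accounts",
    auth=(API_KEY, ""),                        # Basic auth: API key as username
    headers={"Accept": "application/xml"},     # the v2 API returns XML
    params={
        "begin_time": "2023-01-01T00:00:00Z",  # ISO 8601; inclusive lower bound
        "end_time": "2023-06-30T23:59:59Z",    # ISO 8601; inclusive upper bound
        "per_page": 200,                       # page size; Page Limit caps staged pages
    },
)
resp.raise_for_status()
print(resp.text[:500])  # first page of accounts, as raw XML
```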
Location
= filepath
Provide an Amazon S3 bucket path, Google Cloud Storage (GCS) bucket path, or Azure Blob Storage path that will be used to store the data. The data can then be referenced by an external table. A folder will be created at this location with the same name as the target table.
Integration
= drop-down
(GCP only) Choose your Google Cloud Storage Integration. Integrations are required to permit Snowflake to read data from and write to a Google Cloud Storage bucket. Integrations must be set up in advance of selecting them in Matillion ETL. To learn more about setting up a storage integration, read our Storage Integration setup guide.
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Target Table
= string
A name for the new table. Upon running the job, this table will be recreated, dropping any existing table of the same name.
Amazon Redshift
Name
= string
A human-readable name for the component.
Data Source
= drop-down
Select the Recurly data source from the available options.
API Key
= string
A valid Recurly API key. To acquire an API key, read the Recurly Extract authentication guide. Store the API key in the component, or create a managed entry for the API key using Manage Passwords (recommended).
Subdomain
= string
The subdomain of your Recurly account, for example: https://{subdomain}.recurly.com/v2/accounts.
Begin Time
= datetime
Operates on the attributes specified by the Sort property. Filters data records to only include those with datetimes greater than or equal to the supplied datetime. Accepts an ISO 8601 date or date and time.
End Time
= datetime
Operates on the attributes specified by the Sort property. Filters data records to only include those with datetimes less than or equal to the supplied datetime. Accepts an ISO 8601 date or date and time.
Page Limit
= integer
The maximum number of result pages to stage.
Location
= filepath
Provide an Amazon S3 bucket path that will be used to store the data. The data can then be referenced by an external table. A folder will be created at this location with the same name as the target table.
Type
= drop-down
- External: The data will be put into your chosen S3 bucket and referenced by an external table.
- Standard: The data will be staged on your chosen S3 bucket before being loaded into a table. This is the default setting.
Standard Schema
= drop-down
The Redshift schema. The special value, [Environment Default], will use the schema defined in the Matillion ETL environment.
External Schema
= drop-down
The table's external schema. Read Getting Started with Amazon Redshift Spectrum to learn more.
Target Table
= string
A name for the new table. Upon running the job, this table will be recreated, dropping any existing table of the same name.
Google BigQuery
Name
= string
A human-readable name for the component.
Data Source
= drop-down
Select the Recurly data source from the available options.
API Key
= string
A valid Recurly API key. To acquire an API key, read the Recurly Extract authentication guide. Store the API key in the component, or create a managed entry for the API key using Manage Passwords (recommended).
Subdomain
= string
The subdomain of your Recurly account, for example: https://{subdomain}.recurly.com/v2/accounts.
Begin Time
= datetime
Operates on the attributes specified by the Sort property. Filters data records to only include those with datetimes greater than or equal to the supplied datetime. Accepts an ISO 8601 date or date and time.
End Time
= datetime
Operates on the attributes specified by the Sort property. Filters data records to only include those with datetimes less than or equal to the supplied datetime. Accepts an ISO 8601 date or date and time.
Page Limit
= integer
The maximum number of result pages to stage.
Table Type
= drop-down
Select whether the table is Native (the default in BigQuery) or External.
Project
= drop-down
Select the Google Cloud project. The special value, [Environment Default], will use the project defined in the environment. For more information, read Creating and managing projects.
Dataset
= drop-down
Select the Google BigQuery dataset to load data into. The special value, [Environment Default], will use the dataset defined in the environment. For more information, read Introduction to datasets.
Target Table
= string
A name for the new table. Upon running the job, this table will be recreated, dropping any existing table of the same name.
New Target Table
= string
A name for the new external table. Only available when the table type is External.
Cloud Storage Staging Area
= Google Cloud Storage bucket
The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data. Only available when the table type is Native.
Location
= Google Cloud Storage bucket
The URL and path of the target Google Cloud Storage bucket. Only available when the table type is External.
Load Options
= multiple drop-downs
- Clean Cloud Storage Files: Destroy staged files on Google Cloud Storage after loading data. Default is On.
- Cloud Storage File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Data source properties
See below for properties relevant to specific data sources.
accounts
State
= drop-down
The state of the accounts. Select one of "active", "closed", "non_subscriber", "past_due", or "subscriber".
coupon_redemptions_for_account
Account Code
= string
A unique identifier for the Recurly account. This string may contain the following characters: a-z 0-9 @-_.
For more information, refer to the Recurly documentation.
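As a quick illustration of that character restriction, the snippet below validates a made-up account code against the documented set (a-z, 0-9, @, -, _):

```python
# Sanity-check an account code against the documented character set.
# "acme@billing-01" is an invented example, not a real account.
import re

ACCOUNT_CODE = re.compile(r"^[a-z0-9@\-_]+$")

print(bool(ACCOUNT_CODE.match("acme@billing-01")))  # True
print(bool(ACCOUNT_CODE.match("Acme 01")))          # False: uppercase and space
```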
coupon_redemptions_for_invoice
Invoice Code
= integer
The invoice number.
coupon_redemptions_for_subscription
Subscription Code
= string
The code used to identify the subscription. This string may contain the following characters: a-z 0-9 @-_.
coupons
State
= drop-down
The state of the accounts. Select one of "expired", "maxed_out", or "redeemable".
invoices
State
= drop-down
The state of the accounts. Select one of "closed", "failed", "open", "paid", "past_due", "pending", "processing", or "voided".
plan_add_ons
Plan Code
= string
The unique code used to identify the specific plan.
State
= drop-down
The state of the plan add-ons. Select one of "active", "canceled", "expired", "future", "in_trial", "live", or "past_due".
subscriptions
State
= drop-down
The state of the subscriptions. Select one of "active", "canceled", "expired", "future", "in_trial", "live", or "past_due".
subscriptions_for_account
Account Code
= string
A unique identifier for the Recurly account. This string may contain the following characters: a-z 0-9 @-_.
For more information, refer to the Recurly documentation.
State
= drop-down
The state of the subscriptions. Select one of "active", "canceled", "expired", "future", "in_trial", "live", or "past_due".
transactions
State
= drop-down
The state of the transactions. Select one of "failed", "successful", or "voided".
Type
= drop-down
The transaction type. Select one of "authorization", "purchase", or "refund".
transaction_for_account
Account Code
= string
A unique identifier for the Recurly account. This string may contain the following characters: a-z 0-9 @-_.
For more information, refer to the Recurly documentation.
State
= drop-down
The state of the transactions. Select one of "chargeback", "failed", "successful", or "voided".
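For orientation, the mapping below sketches how the *_for_* data sources above plausibly line up with Recurly v2 listing endpoints, with each identifier property filling the path parameter. The exact endpoints the component calls are an assumption here, not confirmed by this documentation:

```python
# Assumed (illustrative) data source -> Recurly v2 endpoint mapping.
ENDPOINTS = {
    "coupon_redemptions_for_account":      "/v2/accounts/{account_code}/redemptions",
    "coupon_redemptions_for_invoice":      "/v2/invoices/{invoice_number}/redemptions",
    "coupon_redemptions_for_subscription": "/v2/subscriptions/{subscription_code}/redemptions",
    "subscriptions_for_account":           "/v2/accounts/{account_code}/subscriptions",
    "transaction_for_account":             "/v2/accounts/{account_code}/transactions",
}

print(ENDPOINTS["subscriptions_for_account"].format(account_code="acme-01"))
# -> /v2/accounts/acme-01/subscriptions
```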
| Snowflake | Delta Lake on Databricks | Amazon Redshift | Google BigQuery | Azure Synapse Analytics |
|---|---|---|---|---|
| ✅ | ❌ | ✅ | ✅ | ❌ |