Pinecone Vector Upsert
Editions
Production use of this feature is available for specific editions only. Contact our sales team for more information.
The Pinecone Vector Upsert component lets you convert data stored in your cloud data warehouse into embeddings and then store these embeddings as vectors in your Pinecone vector database.
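To make the data flow concrete, the sketch below shows roughly what the component automates, using the OpenAI and Pinecone Python clients. This is illustrative only: the table, column, and index names are hypothetical, and fetch_rows is a stand-in for the warehouse read that the component performs for you.

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI(api_key="OPENAI_API_KEY")
pc = Pinecone(api_key="PINECONE_API_KEY")
index = pc.Index("my-index")  # hypothetical index name


def fetch_rows(table, key_col, text_col, limit):
    # Stand-in for the warehouse read: the component queries the configured
    # Table, returning (Key Column, Text Column) pairs capped by Limit.
    return [("1", "How do I log in?"), ("2", "Resetting your password")][:limit]


for key, text in fetch_rows("SUPPORT_ARTICLES", "ID", "BODY", limit=1000):
    # Convert the text column value into an embedding, then store it in Pinecone.
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    index.upsert(vectors=[{"id": str(key), "values": resp.data[0].embedding}])
```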
If the component requires access to a cloud provider, it will use credentials as follows:
- If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
- If using Hybrid SaaS: By default, the component will inherit the agent's execution role (service account role). However, if cloud credentials are associated with your environment, these will override the role.
Data freshness
According to Pinecone's documentation:
Pinecone is eventually consistent, so there can be a slight delay before new or changed records are visible to queries.
Keep this in mind when running query operations shortly after upsert operations.
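If a downstream task needs to read back what was just upserted, one way to allow for this delay is to poll the index statistics until the vector count catches up. A minimal sketch with the Pinecone Python client, assuming a hypothetical index name and expected count:

```python
import time
from pinecone import Pinecone

pc = Pinecone(api_key="PINECONE_API_KEY")
index = pc.Index("my-index")  # hypothetical index name

expected = 1000  # number of vectors just upserted
for _ in range(10):
    stats = index.describe_index_stats()
    if stats.total_vector_count >= expected:
        break
    time.sleep(2)  # allow time for the upsert to become visible
```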
Properties
Name
= string
A human-readable name for the component.
Select your cloud data warehouse:
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Table
= string
The Snowflake table that holds your source data.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which databases are available in the next parameter.
Schema (Database)
= drop-down
The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.
Table
= drop-down
The Databricks table that holds your source data.
Schema
= drop-down
The Redshift schema. The special value, [Environment Default], will use the schema defined in the environment. Read Schemas to learn more.
Table
= drop-down
An existing Redshift table to use as the input.
Key Column
= drop-down
Set a column as the primary key.
Text Column
= drop-down
The column of data to convert into embeddings and then upsert into your Pinecone vector database.
Limit
= integer
Set a limit for the number of rows to load from the table. The default is 1000.
Embedding Provider
= drop-down
The embedding provider is the API service used to convert your text data into vector embeddings. Choose either OpenAI or Amazon Bedrock. The embedding provider receives a piece of text (e.g. "How do I log in?") and returns a vector.
Choose your provider:
OpenAI API Key
= drop-down
Use the drop-down menu to select the corresponding secret definition that denotes the value of your OpenAI API key.
Read Secret definitions to learn how to create a new secret definition.
To create a new OpenAI API key:
- Log in to OpenAI.
- Click your avatar in the top-right of the UI.
- Click View API keys.
- Click + Create new secret key.
- Give your new secret key a name and click Create secret key.
- Copy your new secret key and save it. Then click Done.
Embedding Model
= drop-down
Select an embedding model.
Currently supports:
Model | Dimension |
---|---|
text-embedding-ada-002 | 1536 |
text-embedding-3-small | 1536 |
text-embedding-3-large | 3072 |
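The dimension matters because each Pinecone index is created with a fixed dimension that the embedding model's output must match. As a quick sanity check with the OpenAI Python client (an illustrative sketch, not part of the component):

```python
from openai import OpenAI

client = OpenAI(api_key="OPENAI_API_KEY")
resp = client.embeddings.create(model="text-embedding-3-small", input="How do I log in?")
print(len(resp.data[0].embedding))  # 1536 — must match the index dimension
```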
API Batch Size
= integer
Set the size of the array of data sent per API call. The default size is 10; at that setting, 1000 rows would require 100 API calls.
You may wish to reduce this number if each row contains a high volume of data, or increase it for rows with low data volume.
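To illustrate the arithmetic: the OpenAI embeddings endpoint accepts a list of inputs, so one API call carries one batch. A minimal sketch, where the texts list stands in for your Text Column values:

```python
from openai import OpenAI

client = OpenAI(api_key="OPENAI_API_KEY")
texts = [f"row {i}" for i in range(1000)]  # stand-in for 1000 Text Column values
batch_size = 10  # the component's default API Batch Size

vectors = []
for start in range(0, len(texts), batch_size):  # ceil(1000 / 10) = 100 API calls
    batch = texts[start:start + batch_size]
    resp = client.embeddings.create(model="text-embedding-3-small", input=batch)
    vectors.extend(item.embedding for item in resp.data)
```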
Embedding AWS Region
= drop-down
Select your AWS region.
Embedding Model
= drop-down
Select an embedding model.
Currently supports:
Model | Dimension |
---|---|
Titan Embeddings G1 - Text | 1536 |
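For reference, the sketch below invokes the same model directly through boto3's bedrock-runtime client. The region is an example value; amazon.titan-embed-text-v1 is the model's public identifier.

```python
import json
import boto3

# Invoke Titan Embeddings G1 - Text via the Bedrock runtime.
client = boto3.client("bedrock-runtime", region_name="us-east-1")  # your Embedding AWS Region
response = client.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": "How do I log in?"}),
    contentType="application/json",
    accept="application/json",
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # 1536, matching the dimension in the table above
```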
API Batch Size
= integer
Set the size of the array of data sent per API call. The default size is 10; at that setting, 1000 rows would require 100 API calls.
You may wish to reduce this number if each row contains a high volume of data, or increase it for rows with low data volume.
Pinecone API Key
= drop-down
Use the drop-down menu to select the corresponding secret definition that denotes the value of your Pinecone API key.
Read Secret definitions to learn how to create a new secret definition.
Pinecone Index Name
= drop-down
The name of the Pinecone vector search index to connect to. The list is generated once you pass a valid Pinecone API key.
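The same list can be retrieved programmatically with the Pinecone Python client; a minimal sketch:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="PINECONE_API_KEY")
print(pc.list_indexes().names())  # index names available to this API key
```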
Pinecone Namespace
= string
The name of the Pinecone namespace. Pinecone lets you partition records in an index into namespaces. To retrieve a namespace name:
- Log in to Pinecone.
- Click PROJECTS in the left sidebar.
- Click a project tile. This action will open the list of vector search indexes in your project.
- Click on your vector search index tile.
- Click the NAMESPACES tab. Your namespaces will be listed.
Upsert Batch Size
= integer
Set the size of the batches of vectors sent to Pinecone. The default size is 100 vectors per request.
You may wish to reduce this number if each row contains a high volume of data, or increase it for rows with low data volume.
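A minimal sketch of batched, namespaced upserts with the Pinecone Python client, combining this property with the Pinecone Namespace property above. The index name, namespace, and dummy vectors are hypothetical, and the 1536 dimension assumes one of the 1536-dimension embedding models listed earlier.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="PINECONE_API_KEY")
index = pc.Index("my-index")  # hypothetical index name

batch_size = 100  # the component's default Upsert Batch Size
vectors = [{"id": str(i), "values": [0.1] * 1536} for i in range(1000)]  # dummy data

# ceil(1000 / 100) = 10 upsert requests, all into one namespace
for start in range(0, len(vectors), batch_size):
    index.upsert(
        vectors=vectors[start:start + batch_size],
        namespace="support-articles",  # hypothetical namespace
    )
```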
Snowflake | Databricks | Amazon Redshift |
---|---|---|
✅ | ✅ | ✅ |