Azure OpenAI Prompt

The Azure OpenAI Prompt component uses a large language model (LLM) to provide responses to user-composed prompts. The component takes one or more inputs from your source table, combines the inputs with user prompts, and sends this data to the LLM for processing.

You can configure the component to output the results as either a text value or a JSON object. The results will be stored—along with other metadata—in a destination table on your cloud data warehouse.

We recommend that you read Introduction to prompt engineering if you are new to prompt engineering.

Alternative components include OpenAI Prompt and Amazon Bedrock Prompt.


Properties

Name = string

A human-readable name for the component.


Endpoint = string

URL endpoint denoting access to your Azure OpenAI deployment. You can find this in the Keys & Endpoint section when examining your resources in the Azure portal. Example: https://docs-test-001.openai.azure.com/.


Deployment Name = string

The name of your Azure deployment model. For example, GPT-Turbo.


API Key = string

Use the drop-down menu to select the corresponding secret definition that denotes the value of your OpenAI API key. Read Secret definitions to learn how to create a new secret definition.

You can find your keys in the Keys & Endpoint section when examining your resources in the Azure portal. Use the value of either KEY1 or KEY2.
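The Endpoint, Deployment Name, and API Key properties map directly onto an Azure OpenAI REST request. The sketch below shows how they combine into a chat-completions URL and headers; the `api-version` value is an assumption, so check Azure's documentation for the version your deployment supports.

```python
# Sketch: how the Endpoint, Deployment Name, and API Key properties combine
# into an Azure OpenAI chat-completions request (request is built, not sent).
endpoint = "https://docs-test-001.openai.azure.com"  # Endpoint property (no trailing slash)
deployment = "GPT-Turbo"                             # Deployment Name property
api_key = "KEY1-or-KEY2-from-the-Azure-portal"       # API Key secret value

# api-version is an assumed placeholder; use the version Azure documents
# for your deployment.
url = (
    f"{endpoint}/openai/deployments/{deployment}"
    f"/chat/completions?api-version=2024-02-01"
)
headers = {"api-key": api_key, "Content-Type": "application/json"}
print(url)
```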


Temperature = floating point number

Set the input temperature. Accepts decimal values between 0 and 2. Higher values command the model to take more risks. Microsoft's documentation for Azure OpenAI suggests the following:

Try 0.9 for more creative applications, and 0 for ones with a well-defined answer.


Top P = floating point number

An alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means that only tokens comprising the top 10% of probability mass are considered.

We recommend altering Top P or Temperature, but not both.
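The "one or the other" recommendation can be expressed as a small helper that builds the sampling portion of a request body. This is an illustrative sketch, not part of the component; the field names follow the OpenAI chat-completions schema.

```python
# Sketch: set either Temperature or Top P in the request body, not both.
def sampling_params(temperature=None, top_p=None):
    """Return the sampling fields of a request body, enforcing the
    'alter one or the other, but not both' recommendation."""
    if temperature is not None and top_p is not None:
        raise ValueError("Set Temperature or Top P, not both")
    params = {}
    if temperature is not None:
        params["temperature"] = temperature  # 0 to 2; higher = more creative
    if top_p is not None:
        params["top_p"] = top_p              # e.g. 0.1 = top 10% probability mass
    return params

print(sampling_params(temperature=0.9))
```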


N = integer

You can choose to generate more than one response for your input data when the output is JSON. This parameter can quickly consume your token quota, so use it carefully. Default is 1.


Max Tokens = integer

The maximum number of tokens to process. Each model has its own maximum token allowance, which must be considered. Most models have a context length of 2048 tokens, while the newest models support 4096.
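When budgeting prompts against Max Tokens, a rough estimate is often enough. The sketch below uses the common rule of thumb of roughly four characters per token; this is an approximation, not an exact count, since real counts depend on the model's tokenizer.

```python
# Rough sketch for budgeting against Max Tokens. The 4-characters-per-token
# figure is only a common rule of thumb, not an exact tokenizer count.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

prompt = "Summarize the customer review in one sentence."
context_length = 2048                      # typical model context length
max_completion = context_length - estimate_tokens(prompt)
print(estimate_tokens(prompt), max_completion)
```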


Database = drop-down

The Snowflake source database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake source schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table = drop-down

An existing Snowflake table to use as the input.


Key Column = drop-down

Set a column as the primary key, used to join the results back to the input table.


Limit = integer

Set a limit for the number of rows from the table to load. The default is 1000.


User Context = text editor

Provide your prompt. When Output Format is TEXT, this property is where you must specify all of the questions that you wish the LLM to provide answers to.

Prompts should define the following information:

  • A persona. Who or what should the model impersonate when contextualizing its generative responses?
  • A context. Contextualize the situation for the model to enhance its responses.
  • A tone. What kind of language do you want the model to use?

Providing an example output may improve performance.


Inputs = column editor

Select the source columns to feed as input to the prompt component.

  • Column Name: A column from the input table.
  • Descriptive Name: An alternate descriptive name to better contextualize the column. Recommended if your column names are low-context.
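One way to picture how the prompt, the input columns, and their descriptive names come together is a simple string assembly. This is an illustrative sketch (the column name and row value are invented), not the component's actual prompt template.

```python
# Sketch: combining the User Context with input columns and their
# descriptive names. Column names and values here are illustrative only.
user_context = (
    "You are a customer support analyst. "       # persona
    "You are reviewing product feedback. "       # context
    "Respond in a neutral, professional tone."   # tone
)

# column name -> (descriptive name, row value); "rvw_txt" is a low-context
# column name made readable by its descriptive name.
inputs = {
    "rvw_txt": ("Review text", "The app crashes when I log in."),
}

prompt_lines = [user_context, ""]
for column, (descriptive_name, value) in inputs.items():
    prompt_lines.append(f"{descriptive_name}: {value}")
prompt = "\n".join(prompt_lines)
print(prompt)
```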

Output Format = drop-down

Choose TEXT or JSON. Choosing JSON will activate an additional property, Outputs.


Outputs = column editor

JSON only.

Define the output columns the prompt component will generate.

  • Output: Key of a key:value JSON pair. For example, an output might be "review_score".
  • Context: Text that defines the output you expect the model to provide—that is, some task for the model to perform. For example, "Give a score between 0 and 10 on the level of satisfaction you feel in the user's review where 0 is completely dissatisfied and 10 is extremely satisfied." You may wish to use this parameter to configure the tone of the model (where applicable).
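The Output/Context pairs ask the model for a JSON object keyed by each Output name. The sketch below shows that shape with a hand-written example reply (not real model output) and checks that every requested key came back.

```python
import json

# Sketch: the Outputs grid maps to key:value pairs the model is asked to
# return as JSON. The model_response below is hand-written for illustration.
outputs = {
    "review_score": (
        "Give a score between 0 and 10 on the level of satisfaction you feel "
        "in the user's review where 0 is completely dissatisfied and 10 is "
        "extremely satisfied."
    ),
}

model_response = '{"review_score": 2}'      # illustrative LLM reply
parsed = json.loads(model_response)
assert set(parsed) == set(outputs)          # every requested key came back
print(parsed["review_score"])
```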

Database = drop-down

The Snowflake destination database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake destination schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table = string

The name of the new Snowflake table to load your prompt output into. A new table will be created if one doesn't already exist. Otherwise, any existing table of the same name will be replaced.


Create Table Options

  • Replace if Table Exists: The pipeline will run despite the table already existing. The table will be recreated.
  • Fail if Table Exists: If the table already exists, the pipeline will fail to run.

Enable RAG = boolean

Click Yes to enable Retrieval-Augmented Generation (RAG). Using RAG optimizes an LLM output by invoking an authoritative knowledge base outside of the LLM's initial training data sources. By using RAG, you can extend an LLM's capabilities to specific domains, such as your organization's documentation, without needing to retrain the model.

Defaults to No.


Pretext = text editor

Add text to your LLM prompt before the RAG data is listed, thus instructing the LLM what to do with the RAG data. For example, you might wish to use RAG to search relevant documentation snippets to answer a question.

Example:

"Use any of the following documentation snippets in your response, if relevant:"
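Conceptually, the Pretext is prepended to the retrieved snippets before the user's question. The sketch below shows that assembly with invented snippet text; the exact prompt layout the component uses is not documented here.

```python
# Sketch: how the Pretext property and retrieved RAG snippets might be
# combined into the final prompt. Snippet text is illustrative only.
pretext = "Use any of the following documentation snippets in your response, if relevant:"
snippets = [
    "To log in, click on the key button in the top right and enter "
    "your username and password.",
]
question = "How do I log in?"

rag_prompt = "\n".join([pretext, *snippets, "", question])
print(rag_prompt)
```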


Search Column = drop-down

Choose a column in the source table that contains a search term for the vector database. The value is taken from that column and a vector search is performed. For example, a column value might be a user question such as "How do I log in?". A search is then performed on the vector database using that string, returning the N most relevant results, where N is defined by the Top K parameter, further down.

If your vector database contains vectors created from chunks of text documentation, the RAG data returned in this scenario may include the chunk "to log in, click on the key button in the top right and enter your username and password". This data is then inserted into the LLM prompt to help provide relevant context.


Embedding Provider = drop-down

The embedding provider is the API service used to convert the search term into a vector. Currently supports OpenAI. The embedding provider receives a search term (e.g. "How do I log in?") and returns a vector.


OpenAI API Key = drop-down

Use the drop-down menu to select the corresponding secret definition that denotes the value of your OpenAI API key.

Read Secret definitions to learn how to create a new secret definition.

To create a new OpenAI API key:

  1. Log in to OpenAI.
  2. Click your avatar in the top-right of the UI.
  3. Click View API keys.
  4. Click + Create new secret key.
  5. Give a name for your new secret key and click Create secret key.
  6. Copy your new secret key and save it. Then click Done.

Embedding Model = drop-down

Select an embedding model.

Currently supports:

  • text-embedding-ada-002
  • text-embedding-3-small
  • text-embedding-3-large
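Under the hood, a vector search compares the embedded search term against stored vectors by similarity. The toy cosine similarity below illustrates the idea; real embeddings such as text-embedding-3-small have hundreds or thousands of dimensions, and the two-dimensional vectors here are invented.

```python
import math

# Sketch: cosine similarity, the comparison typically used between an
# embedded search term and stored document vectors. Vectors are toy values.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.9, 0.1]   # embedded search term (illustrative)
doc = [0.8, 0.2]     # stored document vector (illustrative)
print(round(cosine_similarity(query, doc), 3))
```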

Vector Database = drop-down

Select a vector database to use.

Currently supports Pinecone.


Pinecone API Key = drop-down

Use the drop-down menu to select the corresponding secret definition that denotes the value of your Pinecone API key.

Read Secret definitions to learn how to create a new secret definition.

To create a new Pinecone API key:

  1. Log in to your organization's Pinecone account.
  2. Click API Keys in the left sidebar.
  3. Click + Create API Key. You must be a project owner to create an API key.
  4. In the Create New API Key modal, name your new key. Key names cannot be more than 7 characters.
  5. Click Create API Key.
  6. Your new API key will be listed. Click the Show Key Value button (an icon of an eye with a slash through it) to view your API key value.
  7. Click the adjacent Copy Key Value button to copy your API Key.

Pinecone Environment = string

The Pinecone environment to use. To retrieve an environment name:

  1. Log in to Pinecone.
  2. Click PROJECTS in the left sidebar.
  3. Click a project tile. This action will open the list of vector search indexes in your project.
  4. Each vector search index has an environment displayed in the tile.

Pinecone Project ID = string

The ID of your Pinecone project. To retrieve a project ID:

  1. Log in to Pinecone.
  2. Click PROJECTS in the left sidebar. Each existing project will be displayed as a tile, displaying the project name and project ID.

Read Understanding projects to learn more.


Pinecone Index = string

The name of the Pinecone vector search index to connect to. To retrieve an index name:

  1. Log in to Pinecone.
  2. Click PROJECTS in the left sidebar.
  3. Click a project tile. This action will open the list of vector search indexes in your project.

Pinecone Namespace = string

The name of the Pinecone namespace. Pinecone lets you partition records in an index into namespaces. To retrieve a namespace name:

  1. Log in to Pinecone.
  2. Click PROJECTS in the left sidebar.
  3. Click a project tile. This action will open the list of vector search indexes in your project.
  4. Click on your vector search index tile.
  5. Click the NAMESPACES tab. Your namespaces will be listed.

Top K = integer

The number of results to return from the vector database query.

Between 1-100.

Default is 3.
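Top K simply bounds how many of the best-scoring matches the vector database returns. The pure-Python sketch below mimics that selection on invented similarity scores; an actual Pinecone query performs this server-side.

```python
# Sketch: Top K controls how many of the best-scoring vectors the database
# returns. This toy version mimics that selection on invented scores.
def top_k(scored_chunks, k=3):
    """scored_chunks: list of (similarity, text) pairs; return the k best."""
    return sorted(scored_chunks, key=lambda pair: pair[0], reverse=True)[:k]

chunks = [
    (0.91, "login steps"),
    (0.40, "billing"),
    (0.75, "password reset"),
    (0.10, "changelog"),
]
print(top_k(chunks, k=3))
```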


Data Lookup Strategy = drop-down

Select the data lookup strategy. Pinecone stores only the vector associated with text data, plus a JSON metadata blob. While the text data itself can be stored in the metadata blob, size limitations can affect coverage, for example when a larger block of text has been converted to a vector.

  • Raw data in metadata: Choosing this option adds an additional property, Data Path, to provide the path to text data within the metadata JSON blob.
  • Table details in metadata: Database, schema, and table information is used in the metadata to look up the text data in your warehouse table.

Data Path = string

Set the path to the data in the metadata JSON blob.

Default is data.
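The Data Path resolves to a value inside the metadata JSON blob. The sketch below assumes a dot-separated key syntax for nested paths, which is an illustrative convention rather than documented component behavior.

```python
# Sketch: resolving the Data Path property against a metadata JSON blob.
# The dot-separated nested-path syntax is an assumption for illustration.
def lookup(metadata: dict, path: str = "data"):
    value = metadata
    for key in path.split("."):
        value = value[key]
    return value

metadata = {"data": "To log in, click on the key button in the top right."}
print(lookup(metadata, "data"))
```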


Explanation of output

  • DATA: A JSON set of key:value pairs when the output format is JSON.
  • PROMPT_TOKENS: Number of tokens in the prompt.
  • COMPLETION_TOKENS: Number of tokens in the generated completion.
  • ALL_TOKENS: Total number of tokens used in the request (prompt + completion).
  • RAW_DATA: All responses received from the LLM. If you have set the parameter N to a number greater than 1, you will see multiple responses for each record.
  • ERROR_METADATA: Contains information about errors that occurred when processing that row, such as no valid JSON being found, or the request to the LLM timing out. Useful for debugging results where no selected result and/or no raw data is available in the output.
  • SYSTEM_METADATA: A fundamental instruction from Matillion's component to the LLM, advising how it should process and respond to the information it receives. For example: "You are a data analysis and exploration tool. You receive a context prompt from the user, a data object, and an output format. Generate the output in a valid JSON format, including only the output variables without any headers or explanations. Escape any values that would not be valid in a JSON string, such as newlines and double quotes."
  • USER_PROMPT: The prompt defined by the user in the User Context property of the OpenAI Prompt, Amazon Bedrock Prompt, or Azure OpenAI Prompt component.
  • MODEL_NAME: The specific name of the model.
  • PROVIDER_NAME: Name of the LLM provider.
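The three token columns satisfy a simple invariant, sketched below with illustrative numbers shaped like an OpenAI usage object.

```python
# Sketch: ALL_TOKENS is the sum of the prompt and completion token counts.
# The usage numbers below are illustrative, not real component output.
usage = {"prompt_tokens": 58, "completion_tokens": 17}
all_tokens = usage["prompt_tokens"] + usage["completion_tokens"]
print(all_tokens)   # what the ALL_TOKENS column records
```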
