OpenAI Connector Setup Guide
This article describes how to set up the OpenAI connector.
How it works
This connector invokes an OpenAI model with your custom prompt and mapped Tealium data, then sends the response as a JSON event to Tealium Collect for real-time enrichment.
The OpenAI connector should be used for targeted, high-value interactions rather than high-volume events, as excessive usage may result in rate limits or increased API costs in your OpenAI account.
Prompts
The prompt sends context data to the model, asks a specific question, and defines the expected response.
Context data
The model must have context data to evaluate your request. Configure the connector action to use event data or visitor data, then reference this data object in your prompt:
- Event data: {{event_payload}}
- Visitor data: {{visitor_profile}}
For example, include this as the first line of your prompt:
You receive a JSON object describing a customer event:
{{event_payload}}
Question
You must ask the model to evaluate the context data and generate a value. Ask for one or more specific values and be clear about what you're asking for.
For example, this prompt asks for one of three values:
Based on the provided product review event, classify the customer's sentiment as one of: "dissatisfied", "neutral", or "satisfied".
In addition, you must instruct the model to respond in a specific JSON format so that the response can be sent to your account as a valid event.
For example, this part of the prompt specifies the exact JSON format and references Tealium variables using curly braces:
Return only a single JSON object on one line with the following structure:
{
"tealium_account": "{{tealium_account}}",
"tealium_profile": "{{tealium_profile}}",
"tealium_visitor_id": "{{tealium_visitor_id}}",
"tealium_event": "openai_response",
"openai_review_sentiment": "<customer review sentiment>"
}
Set a value for tealium_event that's unique to this connector and give the response attribute a name specific to this prompt. This makes it easy to create event feeds or rules that match these events.
Response event
This connector parses the response from the OpenAI model. If it’s a valid JSON object with the required Tealium parameters, the connector sends it back to your account as an incoming event.
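The check the connector applies can be approximated as follows. This is a simplified sketch for illustration only (the connector's actual parsing logic is internal to Tealium); it shows why free-text model output is dropped while a well-formed event object passes through:

```python
import json

# Tealium parameters that must be present for the response to be
# forwarded to your account as an incoming event.
REQUIRED_KEYS = ("tealium_account", "tealium_profile",
                 "tealium_visitor_id", "tealium_event")

def parse_response_event(raw: str):
    """Return the event dict if the model output is a valid Tealium
    event JSON object, otherwise None (the response is dropped)."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(event, dict):
        return None
    if any(key not in event for key in REQUIRED_KEYS):
        return None
    return event

ok = parse_response_event(
    '{"tealium_account": "acme", "tealium_profile": "main", '
    '"tealium_visitor_id": "383...05d", "tealium_event": "openai_response", '
    '"openai_review_sentiment": "satisfied"}'
)
bad = parse_response_event("The sentiment is satisfied.")  # free text, dropped
print(ok["openai_review_sentiment"], bad)  # satisfied None
```

This is why the prompt must demand a single JSON object containing the required Tealium parameters: anything else never reaches your account.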
To capture these events, create enrichment rules or event feeds that match the events generated by the OpenAI responses. For example, a rule that matches events where tealium_event equals openai_response captures the sentiment events from this connector.
Example: Event trigger
In this example, the customer has just submitted a product review and the prompt asks the model to evaluate the sentiment.
Example prompt:
You receive a JSON object describing a customer's product review:
{{event_payload}}
Based on the provided product review event, classify the customer's sentiment as one of: "dissatisfied", "neutral", or "satisfied".
Return only a single JSON object on one line with the following structure:
{
"tealium_account": "{{tealium_account}}",
"tealium_profile": "{{tealium_profile}}",
"tealium_visitor_id": "{{tealium_visitor_id}}",
"tealium_event": "openai_response",
"openai_review_sentiment": "<customer review sentiment>"
}
Example response event:
{
"tealium_account": "acme",
"tealium_profile": "main",
"tealium_visitor_id": "383...05d",
"tealium_event": "openai_response",
"openai_review_sentiment": "satisfied"
}
Example: Audience trigger
In this example, the customer joined an audience named “Frequent Browser, No Purchases” and the prompt asks the model to evaluate the customer’s intent.
Example prompt:
You receive a JSON object describing a retail visitor:
{{visitor_profile}}
Based on the customer data provided, classify their likely intent as: "bargain hunting", "product comparison", "researching for later", or "not interested". If data is missing, make a best effort guess.
Return only a single JSON object on one line with the following structure:
{
"tealium_account": "{{tealium_account}}",
"tealium_profile": "{{tealium_profile}}",
"tealium_visitor_id": "{{tealium_visitor_id}}",
"tealium_event": "openai_response",
"openai_intent": "<customer intent>"
}
Example response event:
{
"tealium_account": "acme",
"tealium_profile": "main",
"tealium_visitor_id": "383...05d",
"tealium_event": "openai_response",
"openai_intent": "bargain hunting"
}
Testing
Before activating the OpenAI connector in production, enable Debug Mode in the connector mappings and test your setup with trace to validate your configuration and prompt behavior. In debug mode, the connector executes the OpenAI request but logs the result without sending the response event back to your account. In the trace tool, inspect the full request and response in real time, verify the generated output, and ensure that the trigger conditions and attribute mappings are working as expected. This helps catch errors and optimize prompts before incurring production costs or affecting live data.
Usage and cost considerations
Before activating the OpenAI connector, review your OpenAI account limits, usage tier, and pricing model. The connector can generate a high volume of requests depending on your event and audience triggers, which may result in unexpected usage or overage costs.
For more information, see OpenAI Developers: Rate limits.
Key considerations
- Usage tier and rate limits: OpenAI enforces rate limits based on your account tier (requests per minute and tokens per minute). High-frequency events or large audiences can quickly reach these limits, causing throttling or failed requests.
- Pricing and token consumption: OpenAI charges based on the number of input and output tokens processed by the model. Longer prompts, larger payloads, and higher-capacity models increase per-request cost. Review pricing for the specific models you plan to use.
- Monthly spend and budget controls: Set usage caps or alerts in your OpenAI account to prevent unplanned spend. Without limits in place, automated workflows can accumulate significant costs.
- Trigger volume: Avoid attaching the connector to high-volume, low-value events (such as page views). Use events or audiences that represent meaningful customer actions and occur at manageable frequency.
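A back-of-the-envelope estimate helps size these costs before activation. The sketch below uses hypothetical token counts and per-token rates (the $0.40/$1.60 per 1M token figures are placeholders, not actual OpenAI pricing); substitute current pricing for your chosen model from your OpenAI account:

```python
def estimate_monthly_cost(events_per_day: float,
                          input_tokens: int,
                          output_tokens: int,
                          usd_per_1m_input: float,
                          usd_per_1m_output: float) -> float:
    """Rough monthly cost for one connector trigger. Token counts and
    prices are inputs; look up the real per-model rates before relying
    on the result."""
    per_request = (input_tokens * usd_per_1m_input +
                   output_tokens * usd_per_1m_output) / 1_000_000
    return events_per_day * 30 * per_request

# Example: 5,000 triggers/day, ~800 input and ~60 output tokens each,
# at hypothetical rates of $0.40 / $1.60 per 1M tokens.
cost = estimate_monthly_cost(5_000, 800, 60, 0.40, 1.60)
print(f"${cost:,.2f} per month")  # $62.40 per month
```

Running the same numbers against a page-view trigger at hundreds of thousands of events per day makes the "high-value triggers only" guidance concrete.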
Best practices
To get the most value from this connector, follow these guidelines to build effective solutions:
- High-value triggers: Choose event feed or audience triggers that contain rich context or meaningful customer input. Triggering this connector for a high-volume use case may incur additional costs in your OpenAI account or lead to failed requests.
- Be specific: Include details about what the model should evaluate and what values you expect. List the exact values you expect.
- JSON format: Include a valid JSON response template that can be sent as a Tealium event.
- Response values: Reference the response values to capture. For example, if your prompt asks the model to evaluate purchase intent, reference <customer purchase intent> in the prompt where you want that value to appear in the event JSON.
- Tealium data: Reference Tealium data and mapped parameters using double curly braces. For example, to reference the mapped value for tealium_account, write {{tealium_account}} in your prompt.
API information
This connector uses the following vendor API:
- API Name: OpenAI API
- API Version: v1
- API Endpoint: https://api.openai.com/v1
- Documentation: OpenAI API
Configuration
Go to the Connector Marketplace and add a new connector. For general instructions on how to add a connector, see About Connectors.
After adding the connector, configure the following settings:
- API Key: The OpenAI API key used to authenticate requests made by this connector to the OpenAI API. The key must have permission to create responses (model inference) using either All or Restricted permission. It cannot have Read Only permission. For more information, see OpenAI: Assign API Key Permissions.
Actions
| Action Name | AudienceStream | EventStream |
|---|---|---|
| Send Prompt to OpenAI | ✓ | ✓ |
Send Prompt to OpenAI
This action invokes an OpenAI model with your custom prompt and mapped Tealium data. If the model responds with a valid JSON event object, this event is sent back to your account where you can capture the generated value in a real-time enrichment.
Parameters
| Parameter | Description |
|---|---|
| Model | Select the OpenAI model to use for this prompt, for example gpt-4.1-mini or gpt-4.1. We recommend using a model optimized for structured output (JSON) that supports your performance and cost requirements. For more information, see OpenAI Developers: Production best practices. |
| Add Event Payload | (Available for event actions) Check this box to include the event payload for use in the prompt template as variable {{event_payload}}. |
| Add Visitor Profile | (Available for audience actions) Check this box to include the visitor profile for use in the prompt template as variable {{visitor_profile}}. |
| Add Current Visit | (Available for audience actions) Check this box to include the current visit within the variable {{visitor_profile}}. |
| Prompt | Enter the prompt to send to the selected OpenAI model. Follow these guidelines to ensure consistent and machine-readable output: Use double curly braces ({{ }}) to reference mapped parameters, for example {{tealium_account}}, {{tealium_visitor_id}}, and {{visitor_profile}}. Use {{event_payload}} to include the event payload in the prompt after first enabling the Add Event Payload checkbox. Avoid ambiguous phrasing; prompts should be deterministic so the output can be parsed reliably. Define a valid Tealium event JSON object to be returned in the response. Explicitly instruct the model to include tealium_account, tealium_profile, tealium_visitor_id, and your output variable in this event JSON so the connector can forward the event to the Tealium Collect endpoint. The connector automatically includes instructions in the prompt to force the model to return JSON. For example prompts, see How it works. |
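Conceptually, the double-curly-brace references are substituted with mapped values before the prompt is sent. The following is a simplified illustration of that substitution, not the connector's actual template engine; the example prompt text and mapping names are illustrative:

```python
import re

def render_prompt(template: str, mappings: dict) -> str:
    """Replace {{name}} placeholders with mapped values; placeholders
    with no mapping are left intact (simplified illustration only)."""
    def substitute(match: re.Match) -> str:
        name = match.group(1).strip()
        value = mappings.get(name)
        return str(value) if value is not None else match.group(0)
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", substitute, template)

prompt = render_prompt(
    'Classify the review for account "{{tealium_account}}": {{event_payload}}',
    {"tealium_account": "acme", "event_payload": '{"rating": 1}'},
)
print(prompt)
# Classify the review for account "acme": {"rating": 1}
```

Because substitution is purely textual, any placeholder you reference must actually be mapped in the connector action, or it reaches the model as literal curly-brace text.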
Advanced Model Settings
| Parameter | Description |
|---|---|
| temperature | Controls randomness and creativity. |
| max_output_tokens | The maximum number of tokens the model can generate. |
| max_tool_calls | The maximum number of total calls to built-in tools that can be processed in a response. |
| top_p | Controls diversity by sampling only from the top-p probability mass. |
| parallel_tool_calls | Whether to allow the model to run tool calls in parallel. Defaults to true. |
| prompt_cache_key | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. |
| service_tier | Specifies the processing type used for serving the request. |
| tool_choice | How the model should select which tool or tools to use when generating a response. |
| Debug Mode | When debug mode is enabled, the connector accepts the raw OpenAI response without sending it to Tealium Collect. Use trace to validate the response format before enabling full processing. |
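These advanced settings correspond to request-body fields of the OpenAI API. The connector assembles the actual request for you; the sketch below only illustrates how the settings map onto a request body, with example values chosen for a deterministic classification prompt (the prompt text and cache key are illustrative):

```python
import json

# Illustrative request body showing where the advanced model settings
# land. The connector builds and sends this internally; you never
# construct it yourself.
request_body = {
    "model": "gpt-4.1-mini",
    "input": 'You receive a JSON object describing a customer event: ...',
    "temperature": 0.2,          # low randomness for consistent classification
    "max_output_tokens": 200,    # caps response length, and therefore cost
    "top_p": 1.0,
    "parallel_tool_calls": True,
    "service_tier": "auto",
    "prompt_cache_key": "openai-connector-sentiment",  # illustrative key
}
print(json.dumps(request_body, indent=2))
```

Low temperature and a tight max_output_tokens are a reasonable starting point for this connector, since the goal is a short, parseable JSON object rather than creative text.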
This page was last updated: February 11, 2026