OpenAI
The OpenAI connection allows the Intelligence Hub to interact with OpenAI models for generative AI and LLM workflows. This connection enables sending prompts and receiving responses from models hosted by OpenAI. Below are details on the connection and input settings.
Use care when integrating this connection into production workflows. Due to the rapid pace of AI and LLM development, providers have made breaking changes to their APIs in the past. While we aim to respond quickly to such changes, they may temporarily disrupt production systems.
Connection Settings
| Setting | Description |
| --- | --- |
| Password | The password used to authenticate with the OpenAI service. This is typically an API key. |
| Base URL | (Optional) Base URL for the OpenAI API. If not provided, the default OpenAI endpoint is used. Many LLM engines expose an OpenAI-compatible API, so this setting can also point the connection at a local LLM endpoint (e.g., LM Studio). |
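Outside the Hub, the effect of the Base URL setting can be sketched in a few lines of Python. The default endpoint, the path, and the LM Studio URL below are illustrative assumptions, not Hub internals:

```python
# Sketch of how an optional Base URL overrides the default endpoint.
# The default base URL and request path are assumptions for illustration.
DEFAULT_BASE_URL = "https://api.openai.com/v1"

def resolve_endpoint(base_url=None, path="/chat/completions"):
    """Return the full endpoint URL, falling back to the default
    OpenAI base URL when no Base URL is configured."""
    return (base_url or DEFAULT_BASE_URL).rstrip("/") + path

# No Base URL configured: the hosted OpenAI API is used.
print(resolve_endpoint())
# Base URL pointing at a local OpenAI-compatible server (e.g., LM Studio).
print(resolve_endpoint("http://localhost:1234/v1"))
```

Because the override is just a different base URL, any server that speaks the OpenAI API shape can stand in for the hosted service.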
Input Settings
| Setting | Description |
| --- | --- |
| Model | An identifier for the model to use. This is typically a model name or ID provided by OpenAI; common examples are "gpt-4" and "o4-mini". |
| Instructions | (Optional) The prompt or instructions that guide the model's response. |
| Message | The message sent to the model. This is the input the model processes based on the instructions. |
| Response Format | The format in which the model should respond. When set to JSON, the connection attempts to parse the response as JSON; if the response is not valid JSON, the read fails. |
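Taken together, the input settings map onto a chat-style request, and the JSON Response Format behaves like strict parsing of the reply. The sketch below is a hypothetical illustration using only the standard library; the payload field names follow OpenAI's chat-completions request shape, not Hub internals:

```python
import json

def build_payload(model, message, instructions=None):
    """Assemble a chat-completions-style payload from the input
    settings: Instructions becomes the system message, Message the
    user message."""
    messages = []
    if instructions:
        messages.append({"role": "system", "content": instructions})
    messages.append({"role": "user", "content": message})
    return {"model": model, "messages": messages}

def parse_response(text, response_format="Text"):
    """Mimic the Response Format behavior: in JSON mode the reply
    must parse as valid JSON, otherwise the read fails."""
    if response_format == "JSON":
        try:
            return json.loads(text)
        except json.JSONDecodeError as exc:
            raise ValueError(f"read failed: response is not valid JSON ({exc})")
    return text

payload = build_payload("gpt-4", "Summarize this order.",
                        instructions="Respond in one sentence.")
result = parse_response('{"summary": "Order shipped."}', response_format="JSON")
```

In practice this means a model that wraps its JSON in extra prose will fail a JSON-mode read, so instructions that ask for raw JSON output pair naturally with this setting.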