Chat Completion

The Chat Completion action returns the response generated by an OpenAI chat model for a given user prompt.


Example

This flow processes a user's chat question by first receiving it through a Chat completion, converting it into a vector with a Text embedder, performing a Vector search in a PostgreSQL database to retrieve relevant context, and finally passing the user input and the retrieved context to the OpenAI Chat completion, which generates a response that is returned to the client via the Return node.
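This is a standard retrieval-augmented generation loop. As an illustration only, the sketch below approximates the same pattern with the OpenAI Python SDK and a pgvector-enabled PostgreSQL table; the table name `documents`, its `content` and `embedding` columns, the connection string, and the model names are assumptions, not part of this action's configuration.

```python
# Hypothetical sketch of the flow: embed the question, retrieve context from
# PostgreSQL (pgvector), then ask the chat model. Table/column names are assumed.
import psycopg2
from openai import OpenAI

client = OpenAI()  # API key read from the OPENAI_API_KEY environment variable


def answer(question: str) -> str:
    # 1. Text embedder: turn the user's question into a vector.
    embedding = client.embeddings.create(
        model="text-embedding-3-small",   # assumed embedding model
        input=question,
    ).data[0].embedding

    # 2. Vector search: nearest-neighbour lookup against a pgvector column.
    vector_literal = "[" + ",".join(str(x) for x in embedding) + "]"
    conn = psycopg2.connect("dbname=app user=app")   # assumed connection string
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT content FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
            (vector_literal,),
        )
        context = "\n".join(row[0] for row in cur.fetchall())

    # 3. Chat completion: pass the retrieved context plus the user input.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",              # assumed model id
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    # 4. Return node: hand the generated text back to the client.
    return completion.choices[0].message.content
```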

Properties

| Name | Type | Description |
| --- | --- | --- |
| Title | Optional | The title of the action. |
| Connection | Required | Defines the connection to OpenAI. |
| Model ID | Required | ID of the chosen model. |
| User Prompt | Required | The input message from the user, which the model processes to generate a response. |
| System Prompt | Optional | A system-level instruction that guides the model's behavior and response style. |
| History | Optional | A record of past interactions that provides conversational context, helping the model maintain continuity. |
| Temperature | Optional | Controls the randomness and creativity of the generated responses. Lower values (e.g., 0.2) produce more focused, predictable text, suited to tasks that require precision; higher values (e.g., 1.5) increase creativity and variability but may produce less coherent or relevant content. Defaults to 0.7 if not set. |
| Max Completion Tokens | Optional | Limits the number of tokens (pieces of text roughly corresponding to words or word fragments) in the model's response. |
| Result Variable Name | Optional | The variable that stores the generated AI response. Default: "response". |
| Description | Optional | Additional details or notes about the chat completion setup. |
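For reference, the sketch below shows roughly how these properties map onto an OpenAI chat completion request made with the official Python SDK. It is not the action's implementation; the model ID, the sample prompts, and the history contents are illustrative assumptions, while the temperature default and the "response" variable name follow the table above.

```python
# Illustrative mapping of the action's properties onto an OpenAI request.
# All concrete values below are examples, not defaults enforced by the action.
from openai import OpenAI

client = OpenAI()  # Connection: API key taken from the environment

system_prompt = "You are a concise support assistant."        # System Prompt
history = [                                                    # History
    {"role": "user", "content": "What plans do you offer?"},
    {"role": "assistant", "content": "Free, Pro, and Enterprise."},
]
user_prompt = "How much does Pro cost?"                        # User Prompt

completion = client.chat.completions.create(
    model="gpt-4o-mini",                  # Model ID (assumed)
    messages=[{"role": "system", "content": system_prompt}]
    + history
    + [{"role": "user", "content": user_prompt}],
    temperature=0.7,                      # Temperature (default 0.7)
    max_completion_tokens=512,            # Max Completion Tokens
)

response = completion.choices[0].message.content  # Result Variable Name: "response"
print(response)
```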