"OpenAI" (Service Connection)
Use the OpenAI API with the Wolfram Language.
Connecting & Authenticating
Requests
Examples
Scope (12)
Text (4)
Completion (1)
Chat (2)
Respond to a chat containing multiple messages:
Change the sampling temperature:
Increase the maximum number of tokens returned:
Allow the model to use an LLMTool:
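The chat examples above can be sketched with ServiceExecute. The model name, tool definition, and message contents here are illustrative assumptions, not values from this page:

```wolfram
(* connect to the service; prompts for an API key on first use *)
openai = ServiceConnect["OpenAI"]

(* respond to a chat containing multiple messages, with a lower
   sampling temperature for more deterministic output *)
ServiceExecute[openai, "Chat",
 {"Messages" -> {
    <|"Role" -> "system", "Content" -> "You are a terse assistant."|>,
    <|"Role" -> "user", "Content" -> "Name a prime number above 100."|>},
  "Temperature" -> 0.2}]

(* allow the model to call an LLMTool; this tool is a made-up example *)
tool = LLMTool["CurrentTime", {}, DateString[] &];
ServiceExecute[openai, "Chat",
 {"Messages" -> {<|"Role" -> "user", "Content" -> "What time is it?"|>},
  "Tools" -> {tool}}]
```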
Image (4)
ImageCreate (2)
ImageVariation (1)
Audio (4)
AudioTranscription (1)
Transcribe an Audio object:
Use a prompt to provide context for the transcription:
Transcribe a recording made in a different language:
AudioTranslation (1)
Translate an Audio object into English:
SpeechSynthesize (1)
Use a different voice for the synthesis:
"TestConnection" — returns Success for working connection, Failure otherwise
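A connection check can be sketched as follows; the call returns a Success object when the stored credentials work:

```wolfram
(* open (or reuse) the connection, then verify it *)
openai = ServiceConnect["OpenAI"];
ServiceExecute[openai, "TestConnection"]
```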
Text
"Completion" — create text completion for a given prompt
"Prompt" | (required) | the prompt for which to generate completions
"BestOf" | Automatic | number of completions to generate server-side before returning the "best"
"Echo" | Automatic | include the prompt in the completion
"FrequencyPenalty" | Automatic | penalize tokens based on their existing frequency in the text so far (between -2 and 2)
"LogProbs" | Automatic | include the log probabilities of the most likely tokens, as well as the chosen tokens (between 0 and 5)
"MaxTokens" | Automatic | maximum number of tokens to generate
"Model" | Automatic | name of the model to use
"N" | Automatic | number of completions to return
"PresencePenalty" | Automatic | penalize new tokens based on whether they appear in the text so far (between -2 and 2)
"StopTokens" | None | up to four strings where the API will stop generating further tokens
"Stream" | Automatic | return the result as server-sent events
"Suffix" | Automatic | suffix that comes after the completion
"Temperature" | Automatic | sampling temperature (between 0 and 2)
"ToolChoice" | Automatic | which (if any) tool is called by the model
"Tools" | Automatic | one or more LLMTool objects available to the model
"TotalProbabilityCutoff" | None | an alternative to sampling with temperature (nucleus sampling), in which the model considers only the tokens comprising the requested probability mass
"User" | Automatic | unique identifier representing the end user
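A minimal "Completion" request can be sketched as below; the model name is an assumption, since this page does not list concrete models:

```wolfram
(* complete a short prompt deterministically *)
ServiceExecute["OpenAI", "Completion",
 {"Prompt" -> "The capital of France is",
  "MaxTokens" -> 5,
  "Temperature" -> 0,   (* temperature 0 for near-deterministic output *)
  "Model" -> "gpt-3.5-turbo-instruct"}]
```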
"Chat" — create a response for the given chat conversation
"Messages" | (required) | a list of messages in the conversation, each given as an association with "Role" and "Content" keys
"FrequencyPenalty" | Automatic | penalize tokens based on their existing frequency in the text so far (between -2 and 2)
"LogProbs" | Automatic | include the log probabilities of the most likely tokens, as well as the chosen tokens (between 0 and 5)
"MaxTokens" | Automatic | maximum number of tokens to generate
"Model" | Automatic | name of the model to use
"N" | Automatic | number of chat completions to return
"PresencePenalty" | Automatic | penalize new tokens based on whether they appear in the text so far (between -2 and 2)
"StopTokens" | None | up to four strings where the API will stop generating further tokens
"Stream" | Automatic | return the result as server-sent events
"Suffix" | Automatic | suffix that comes after the completion
"Temperature" | Automatic | sampling temperature (between 0 and 2)
"ToolChoice" | Automatic | which (if any) tool is called by the model
"Tools" | Automatic | one or more LLMTool objects available to the model
"TotalProbabilityCutoff" | None | an alternative to sampling with temperature (nucleus sampling), in which the model considers only the tokens comprising the requested probability mass
"User" | Automatic | unique identifier representing the end user
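A "Chat" request using a few of the parameters above can be sketched as follows; the message content and parameter values are illustrative:

```wolfram
(* ask for two alternative completions, capped at 50 tokens each *)
ServiceExecute["OpenAI", "Chat",
 {"Messages" -> {
    <|"Role" -> "user", "Content" -> "Give me three rhymes for 'cat'."|>},
  "N" -> 2,
  "MaxTokens" -> 50}]
```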
"Embedding" — create an embedding vector representing the input text
"Input" | (required) | one text or a list of texts to get embeddings for
"EncodingFormat" | Automatic | format in which to return the embeddings
"EncodingLength" | Automatic | number of dimensions of the result
"Model" | Automatic | name of the model to use
"User" | Automatic | unique identifier representing the end user
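An "Embedding" request can be sketched as below. The model name is an assumption, and the result is assumed to be a list of numeric vectors, one per input text:

```wolfram
(* embed two sentences in one request *)
vecs = ServiceExecute["OpenAI", "Embedding",
  {"Input" -> {"I like apples.", "I enjoy fruit."},
   "Model" -> "text-embedding-3-small"}];

(* cosine similarity as a rough measure of semantic closeness *)
1 - CosineDistance[First[vecs], Last[vecs]]
```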
Image
"ImageCreate" — create a square image given a prompt
"Prompt" | (required) | text description of the desired image
"Model" | Automatic | name of the model to use
"N" | Automatic | number of images to generate
"Quality" | Automatic | quality of the result; possible values include "hd"
"Size" | Automatic | size of the generated image
"Style" | Automatic | style of the generated images; possible values include "vivid" and "natural"
"User" | Automatic | unique identifier representing the end user
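An "ImageCreate" request can be sketched as follows; the "Size" value and prompt are illustrative assumptions:

```wolfram
(* generate one square image from a text prompt *)
ServiceExecute["OpenAI", "ImageCreate",
 {"Prompt" -> "a watercolor lighthouse at dusk",
  "Size" -> "1024x1024",
  "Style" -> "natural"}]
```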
"ImageVariation" — create a variation of a given image
"Image" | (required) | image to use as the basis for the variation
"N" | Automatic | number of images to generate
"Size" | Automatic | size of the generated image
"User" | Automatic | unique identifier representing the end user
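An "ImageVariation" request can be sketched as below; `img` stands in for any Image object you already have:

```wolfram
(* use an existing image as the basis for two variations *)
img = ExampleData[{"TestImage", "Lena"}];  (* any Image object works here *)
ServiceExecute["OpenAI", "ImageVariation", {"Image" -> img, "N" -> 2}]
```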
"ImageEdit" — create an edited image given an original image and a prompt
"Image" | (required) | image to edit; requires an alpha channel if a mask is not provided
"Mask" | None | additional image whose fully transparent areas indicate where the input should be edited
"N" | Automatic | number of images to generate
"Prompt" | None | text description of the desired image edit
"Size" | Automatic | size of the generated image
"User" | Automatic | unique identifier representing the end user
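An "ImageEdit" request can be sketched as follows; `img` and `mask` stand in for images you supply, and edits are applied only where the mask is fully transparent:

```wolfram
(* edit an image in the regions exposed by the mask's transparent areas *)
ServiceExecute["OpenAI", "ImageEdit",
 {"Image" -> img,    (* needs an alpha channel if "Mask" is omitted *)
  "Mask" -> mask,
  "Prompt" -> "add a red balloon in the sky"}]
```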
Audio
"AudioTranscription" — transcribe an audio recording into the input language
"Audio" | (required) | the Audio object to transcribe
"Language" | Automatic | language of the input audio
"Model" | Automatic | name of the model to use
"Prompt" | None | optional text to guide the model's style or continue a previous audio segment
"Temperature" | Automatic | sampling temperature (between 0 and 1)
"TimestampGranularities" | Automatic | timestamp granularity of the transcription (either "word" or "segment")
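An "AudioTranscription" request can be sketched as below; the language code and prompt text are illustrative:

```wolfram
(* record from the default input device, or Import an audio file instead *)
audio = AudioCapture[];

(* transcribe, telling the model the input language and giving context *)
ServiceExecute["OpenAI", "AudioTranscription",
 {"Audio" -> audio,
  "Language" -> "fr",
  "Prompt" -> "Interview about astronomy."}]
```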
"AudioTranslation" — translate an audio recording into English
"Audio" | (required) | the Audio object to translate
"Model" | Automatic | name of the model to use
"Prompt" | None | optional text to guide the model's style or continue a previous audio segment
"Temperature" | Automatic | sampling temperature (between 0 and 1)
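An "AudioTranslation" request takes the same shape; `audio` stands in for any non-English Audio object:

```wolfram
(* translate the recording into English text *)
ServiceExecute["OpenAI", "AudioTranslation", {"Audio" -> audio}]
```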
"SpeechSynthesize" — synthesize speech from text
"Input" | (required) | the text to synthesize
"Model" | Automatic | name of the model to use
"Speed" | Automatic | the speed of the produced speech
"Voice" | Automatic | the voice to use for the synthesis
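A "SpeechSynthesize" request can be sketched as follows; the voice name is an assumption, and the result is assumed to be an Audio object:

```wolfram
(* synthesize speech and keep the Audio object for playback or export *)
speech = ServiceExecute["OpenAI", "SpeechSynthesize",
  {"Input" -> "Hello from the Wolfram Language.",
   "Voice" -> "alloy",
   "Speed" -> 1.0}]
```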
Model Lists
"ChatModelList" — list models available for the "Chat" request
"CompletionModelList" — list models available for the "Completion" request
"EmbeddingModelList" — list models available for the "Embedding" request
"ModerationModelList" — list models available for the "Moderation" request
"ImageModelList" — list models available for the image-related requests
"SpeechSynthesizeModelList" — list models available for the "SpeechSynthesize" request
"AudioModelList" — list models available for the "AudioTranscription" and "AudioTranslation" requests
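Each list request takes no parameters; for example, the names accepted by the "Model" parameter of "Chat" can be retrieved with:

```wolfram
(* enumerate the chat-capable model names *)
ServiceExecute["OpenAI", "ChatModelList"]
```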
Moderation
"Moderation" — classify whether text violates OpenAI's Content Policy
"Input" | (required) | the text to classify
"Model" | Automatic | name of the model to use
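A "Moderation" request can be sketched as below; the input text is illustrative:

```wolfram
(* classify a piece of text against the content policy *)
ServiceExecute["OpenAI", "Moderation",
 {"Input" -> "Some text to check against the content policy."}]
```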
ServiceExecute ▪ ServiceConnect ▪ LLMFunction ▪ LLMSynthesize ▪ ChatEvaluate ▪ LLMConfiguration ▪ ImageSynthesize ▪ SpeechRecognize ▪ "AlephAlpha" ▪ "Anthropic" ▪ "Cohere" ▪ "DeepSeek" ▪ "GoogleGemini" ▪ "Groq" ▪ "MistralAI" ▪ "TogetherAI" ▪ "GoogleSpeech"
Generate a response from a chat:
Compute the embedding for a sentence:
Generate an image from a prompt:
Transcribe an Audio object: