"xAI" (Service Connection)
Connecting & Authenticating
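A connection is opened with ServiceConnect; a minimal sketch (a dialog asks for an API key the first time):

```wolfram
(* Open (or reuse) a connection to the xAI service *)
xai = ServiceConnect["xAI"]

(* Check that the stored credentials work *)
ServiceExecute[xai, "TestConnection"]
```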
Requests
"TestConnection" — returns Success for working connection, Failure otherwise
"Chat" — create a response for the given chat conversation
parameter | default | description
"Messages" | (required) | a list of messages in the conversation, each given as an association with "Role" and "Content" keys
"FrequencyPenalty" | Automatic | penalize tokens based on their existing frequency in the text so far (between -2 and 2)
"LogProbs" | Automatic | include the log probabilities of the most likely tokens, as well as the chosen tokens (between 0 and 5)
"MaxTokens" | Automatic | maximum number of tokens to generate
"Model" | Automatic | name of the model to use
"N" | Automatic | number of chat completions to return
"PresencePenalty" | Automatic | penalize new tokens based on whether they appear in the text so far (between -2 and 2)
"StopTokens" | None | up to four strings where the API will stop generating further tokens
"Stream" | Automatic | return the result as server-sent events
"Temperature" | Automatic | sampling temperature (between 0 and 2)
"ToolChoice" | Automatic | which (if any) tool is called by the model
"Tools" | Automatic | one or more LLMTool objects available to the model
"TotalProbabilityCutoff" | None | an alternative to sampling with temperature, called nucleus sampling, in which the model considers the tokens comprising the requested probability mass
"User" | Automatic | unique identifier representing the end user
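Putting the parameters above together, a "Chat" request might be sketched as follows (the model name "grok-beta" is illustrative; use the "ChatModelList" request to see what your account offers):

```wolfram
(* Single chat completion with an explicit model and token limit *)
ServiceExecute["xAI", "Chat", {
  "Messages" -> {
    <|"Role" -> "system", "Content" -> "You are a helpful assistant."|>,
    <|"Role" -> "user", "Content" -> "Name the largest moon of Saturn."|>
  },
  "Model" -> "grok-beta",
  "MaxTokens" -> 100
}]
```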
"ImageCreate" — create a square image given a prompt
parameter | default | description
"Prompt" | (required) | text description of the desired image
"Model" | Automatic | name of the model to use
"N" | Automatic | number of images to generate
"Quality" | Automatic | control the quality of the result; possible values include "hd"
"User" | Automatic | unique identifier representing the end user
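For example, an "ImageCreate" request might look like this (the prompt is illustrative):

```wolfram
(* Generate a single image from a text prompt *)
ServiceExecute["xAI", "ImageCreate", {
  "Prompt" -> "a watercolor painting of a lighthouse at dusk",
  "N" -> 1
}]
```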
Model Lists
"ChatModelList" — list models available for the "Chat" request
"ImageModelList" — list models available for the image-related requests
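Both lists are retrieved like any other request; a sketch:

```wolfram
(* Models usable with the "Chat" request *)
ServiceExecute["xAI", "ChatModelList"]

(* Models usable with the image-related requests *)
ServiceExecute["xAI", "ImageModelList"]
```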
Examples
Basic Examples (1)
Scope (3)
Text (2)
Chat (2)
Respond to a chat containing multiple messages:
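A sketch of such a request, with an assistant turn included in the history:

```wolfram
ServiceExecute["xAI", "Chat", {
  "Messages" -> {
    <|"Role" -> "user", "Content" -> "Who wrote 'Dune'?"|>,
    <|"Role" -> "assistant", "Content" -> "Frank Herbert."|>,
    <|"Role" -> "user", "Content" -> "What year was it published?"|>
  }
}]
```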
Change the sampling temperature:
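For example, raising "Temperature" (values between 0 and 2) makes sampling more random:

```wolfram
ServiceExecute["xAI", "Chat", {
  "Messages" -> {<|"Role" -> "user", "Content" -> "Write a haiku about autumn."|>},
  "Temperature" -> 1.5
}]
```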
Limit the number of tokens returned:
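A sketch using "MaxTokens" to cap the length of the reply:

```wolfram
ServiceExecute["xAI", "Chat", {
  "Messages" -> {<|"Role" -> "user", "Content" -> "Summarize the plot of Hamlet."|>},
  "MaxTokens" -> 20
}]
```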
Allow the model to use an LLMTool:
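A sketch, assuming the three-argument LLMTool[name, parameters, function] form:

```wolfram
(* A parameterless tool the model may call to get the current date *)
tool = LLMTool["CurrentDate", {}, CurrentDate[] &];

ServiceExecute["xAI", "Chat", {
  "Messages" -> {<|"Role" -> "user", "Content" -> "What is today's date?"|>},
  "Tools" -> {tool}
}]
```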
Send a chat request asynchronously using ServiceSubmit and collect the response using the HandlerFunctions and HandlerFunctionsKeys options:
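A sketch of the asynchronous form; the "BodyReceived" handler key is an assumption about which events this connection fires:

```wolfram
task = ServiceSubmit[
  ServiceRequest["xAI", "Chat",
    {"Messages" -> {<|"Role" -> "user", "Content" -> "Hello!"|>}}],
  HandlerFunctions -> <|"BodyReceived" -> (Print[#["Body"]] &)|>,
  HandlerFunctionsKeys -> {"Body"}
]
```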
Image (1)
ImageCreate (1)
Authentication (4)
If no connections exist, ServiceConnect will prompt a dialog where an API key can be entered:
The API key can also be specified using the Authentication option:
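For example (the key shown is a placeholder):

```wolfram
ServiceConnect["xAI", Authentication -> <|"APIKey" -> "xai-XXXXXXXX"|>]
```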
Use credentials stored in SystemCredential:
The credentials are stored directly by the framework, since SystemCredential["key"] evaluates to a string:
Only store the SystemCredential key name rather than its value by using RuleDelayed:
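A sketch, assuming a credential was previously stored under the name "xAIKey":

```wolfram
(* With RuleDelayed, SystemCredential["xAIKey"] is re-evaluated at connection
   time, so only the credential name, not the key value, is stored *)
ServiceConnect["xAI", Authentication -> <|"APIKey" :> SystemCredential["xAIKey"]|>]
```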
Retrieve the value of the authentication credentials used in a specific service object:
Overwrite the authentication credentials of an existing service object:
See Also
LLMSynthesize
Service Connections: OpenAI ▪ Anthropic ▪ GoogleGemini ▪ AlephAlpha ▪ Cohere ▪ DeepSeek ▪ Groq ▪ MistralAI ▪ TogetherAI