LLMSynthesizeSubmit
LLMSynthesizeSubmit[prompt]
generates text asynchronously according to the input prompt.
LLMSynthesizeSubmit[{prompt1,…}]
combines multiple prompts together.
Details and Options
- LLMSynthesizeSubmit generates text asynchronously according to the instruction in the prompt using a large language model (LLM). It can create content, complete sentences, extract information and more.
- LLMSynthesizeSubmit requires external service authentication, billing and internet connectivity.
- Possible values for prompt include:
    "text"              static text
    LLMPrompt["name"]   a repository prompt
    StringTemplate[…]   templated text
    TemplateObject[…]   template for creating a prompt
    Image[…]            a static image (not supported by all LLMs)
    {prompt1,…}         a list of prompts
- Static content in prompt can be disambiguated using an explicit association syntax:
    <|"Type"->"Text","Data"->data|>    an explicit text part
    <|"Type"->"Image","Data"->data|>   an explicit image part (supports File[…] objects)
- Template objects are automatically converted to strings via TemplateObject[…][].
- Prompts created with TemplateObject can contain text and images.
- LLMSynthesizeSubmit returns a TaskObject[…].
- The following options can be specified:
    Authentication         Inherited   explicit user ID and API key
    HandlerFunctions                   how to handle generated events
    HandlerFunctionsKeys   Automatic   parameters to supply to handler functions
    LLMEvaluator           Inherited   LLM configuration to use
- During the asynchronous execution of LLMSynthesizeSubmit, events can be generated.
- Events triggered by the LLM:
    "ContentChunkReceived"       incremental message content received
    "StoppingReasonReceived"     stopping reason for the generation received
    "MetadataReceived"           other metadata received
    "ToolRequestReceived"        LLMToolRequest[…] received
    "UsageInformationReceived"   incremental usage information received
- Events triggered by local processing:
    "CompletionGenerated"     the completion is generated
    "ToolResponseGenerated"   an LLMToolResponse[…] is generated
- Events triggered by the task framework:
    "FailureOccurred"     failure is generated during the computation
    "TaskFinished"        task is completely finished
    "TaskRemoved"         task is being removed
    "TaskStarted"         task is started
    "TaskStatusChanged"   task status changed
- HandlerFunctions->f uses f for all the events.
- With the specification HandlerFunctions-><|…,"eventi"->fi,…|>, fi[assoc] is evaluated whenever eventi is generated. The elements of assoc have keys specified by the setting for HandlerFunctionsKeys (see the sketch after the key listing below).
- Possible keys specified by HandlerFunctionsKeys include:
    "CompletionText"        textual answer by the LLM
    "CompletionToolsText"   textual answer including tool interactions
    "ContentChunk"          a message part
    "EventName"             the name of the event being handled
    "Failure"               failure object generated if the task failed
    "FullText"              string representation of "History"
    "History"               complete history including prompt and completion
    "Model"                 model used to generate the message
    "Prompt"                content submitted to the LLM
    "PromptText"            string representation of "Prompt"
    "StoppingReason"        why the generation has stopped
    "Task"                  the task object generated by LLMSynthesizeSubmit
    "TaskStatus"            the status of the task
    "Timestamp"             timestamp of the message
    "ToolRequest"           last generated LLMToolRequest[…]
    "ToolRequests"          list of LLMToolRequest objects
    "ToolResponse"          last generated LLMToolResponse[…]
    "ToolResponses"         list of LLMToolResponse objects
    "Usage"                 token usage
    "UsageIncrement"        token usage update
    {key1,…}                a list of keys
    All                     all keys
    Automatic               figures out the keys from HandlerFunctions
- Values that have not yet been received are given as Missing["NotAvailable"].
- LLMEvaluator can be set to an LLMConfiguration object or an association with any of the following keys:
    "MaxTokens"                maximum number of tokens to generate
    "Model"                    base model
    "PromptDelimiter"          string to insert between prompts
    "Prompts"                  initial prompts or LLMPromptGenerator objects
    "StopTokens"               tokens on which to stop generation
    "Temperature"              sampling temperature
    "ToolMethod"               method to use for tool calling
    "Tools"                    list of LLMTool objects to make available
    "TopProbabilities"         sampling classes cutoff
    "TotalProbabilityCutoff"   sampling probability cutoff (nucleus sampling)
- Valid forms of "Model" include:
    name                                  named model
    {service,name}                        named model from service
    <|"Service"->service,"Name"->name|>   fully specified model
- Multiple prompts are separated by the "PromptDelimiter" property.
- The generated text is sampled from a distribution. Details of the sampling can be specified using the following properties of the LLMEvaluator:
    "Temperature"->t              Automatic   sample using a positive temperature t
    "TopProbabilities"->k         Automatic   sample only among the k highest-probability classes
    "TotalProbabilityCutoff"->p   Automatic   sample among the most probable choices with an accumulated probability of at least p (nucleus sampling)
- The Automatic value of these parameters uses the default for the specified "Model".
- Possible values for "ToolMethod" include:
    "Service"   rely on the tool mechanism of service
    "Textual"   use prompt-based tool calling
- Possible values for Authentication are:
    Automatic          choose the authentication scheme automatically
    Environment        check for a key in the environment variables
    SystemCredential   check for a key in the system keychain
    ServiceObject[…]   inherit the authentication from a service object
    assoc              provide explicit key and user ID
- With Authentication->Automatic, the function checks the variable ToUpperCase[service]<>"_API_KEY" in Environment and SystemCredential; otherwise, it uses ServiceConnect[service].
- When using Authentication->assoc, assoc can contain the following keys:
    "ID"       user identity
    "APIKey"   API key used to authenticate
- LLMSynthesizeSubmit uses machine learning. Its methods, training sets and biases included therein may change and yield varied results in different versions of the Wolfram Language.
Examples
Basic Examples (2)
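The following is an illustrative sketch rather than captured output: it assumes an authenticated LLM service connection, and the prompt is a placeholder. Submit a generation task and print the completion when it is generated:

    task = LLMSynthesizeSubmit["Write a haiku about sunsets.",
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]

The result is a TaskObject[…]; it can be managed with the standard task framework, for example TaskRemove[task] to cancel the generation.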
Scope (3)
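A sketch of combining multiple prompts into one submission (the prompts are placeholders); the parts are joined using the "PromptDelimiter" setting:

    LLMSynthesizeSubmit[{"Answer in French.", "What is the capital of Italy?"},
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]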
Options (8)
Authentication (4)
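A sketch of supplying explicit credentials; "MY_API_KEY" is a placeholder name for a key previously stored in the system keychain via SystemCredential:

    LLMSynthesizeSubmit["Hello!",
      Authentication -> <|"APIKey" -> SystemCredential["MY_API_KEY"]|>,
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]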
LLMEvaluator (4)
By default, the text generation continues until a termination token is generated:
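A possible version of this example (the prompt is a placeholder, and an authenticated default model is assumed):

    LLMSynthesizeSubmit["Tell me a story about llamas.",
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]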
Limit the number of generated samples (tokens):
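For instance, with a limit of 10 tokens (a sketch under the same assumptions):

    LLMSynthesizeSubmit["Tell me a story about llamas.",
      LLMEvaluator -> <|"MaxTokens" -> 10|>,
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]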
Specify that the sampling should be performed at zero temperature:
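A sketch; at zero temperature, repeated runs produce nearly deterministic output:

    LLMSynthesizeSubmit["Tell me a story about llamas.",
      LLMEvaluator -> <|"Temperature" -> 0|>,
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]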
Specify a high temperature to get more variation in the generation:
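A sketch with a high sampling temperature:

    LLMSynthesizeSubmit["Tell me a story about llamas.",
      LLMEvaluator -> <|"Temperature" -> 1.5|>,
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]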
Specify the maximum cumulative probability before cutting off the distribution:
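A sketch using nucleus sampling with a cumulative probability cutoff of 0.2:

    LLMSynthesizeSubmit["Tell me a story about llamas.",
      LLMEvaluator -> <|"TotalProbabilityCutoff" -> 0.2|>,
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]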
Specify the service and the model to use for the generation:
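For instance, using the {service,name} form of "Model" (the model name shown is a placeholder and may not be available on your service):

    LLMSynthesizeSubmit["Tell me a story about llamas.",
      LLMEvaluator -> <|"Model" -> {"OpenAI", "gpt-4o-mini"}|>,
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]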