LLMSynthesize
LLMSynthesize[prompt]
generates text according to the input prompt.
LLMSynthesize[{prompt1,…}]
combines multiple prompt_i together.
LLMSynthesize[…,prop]
returns the specified property of the generated text.
Details and Options
- LLMSynthesize generates text according to the instructions in the prompt using a large language model (LLM). It can create content, complete sentences, extract information and more.
- LLMSynthesize requires external service authentication, billing and internet connectivity.
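A minimal call might look like the following (the prompt text is illustrative; the call requires a configured LLM service connection, and the output will vary):

```wolfram
(* generate text from a single string prompt *)
LLMSynthesize["Write a haiku about the ocean."]
```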
- Possible values for prompt include:
  "text"             static text
  LLMPrompt["name"]  a repository prompt
  StringTemplate[…]  templated text
  TemplateObject[…]  a template for creating a prompt
  Image[…]           an image
  {prompt1,…}        a list of prompts
- Template objects are automatically converted to strings via TemplateObject[…][].
- A prompt created with TemplateObject can contain text and images.
- Not every LLM supports image input.
- Supported values for prop include:
  "CompletionText"       textual answer by the LLM
  "CompletionToolsText"  textual answer including tool interactions
  "FullText"             string representation of "History"
  "History"              complete history including prompt and completion
  "Prompt"               content submitted to the LLM
  "PromptText"           string representation of "Prompt"
  "ToolRequests"         list of LLMToolRequest objects
  "ToolResponses"        list of LLMToolResponse objects
  "Usage"                token usage
  {prop1,prop2,…}        multiple properties
  All                    all properties
- "FullTextAnnotations", "ToolRequests" and "ToolResponses" give associations with elements of the form {start,end}→val, where val refers to an object and start and end delimit the span of characters in "FullText" where val appears.
- The following options can be specified:
  Authentication     Automatic           explicit user ID and API key
  LLMEvaluator       $LLMEvaluator       LLM configuration to use
  ProgressReporting  $ProgressReporting  how to report the progress of the computation
- LLMEvaluator can be set to an LLMConfiguration object or an association with any of the following keys:
  "MaxTokens"               maximum number of tokens to generate
  "Model"                   base model
  "PromptDelimiter"         string to insert between prompts
  "Prompts"                 initial prompts or LLMPromptGenerator objects
  "StopTokens"              tokens on which to stop generation
  "Temperature"             sampling temperature
  "ToolMethod"              method to use for tool calling
  "Tools"                   list of LLMTool objects to make available
  "TopProbabilities"        sampling classes cutoff
  "TotalProbabilityCutoff"  sampling probability cutoff (nucleus sampling)
- Valid forms of "Model" include:
  name                                            named model
  {service,name}                                  named model from service
  <|"Service"→service,"Name"→name,"Task"→task|>   fully specified model
- Possible values for task include "Chat" and "Completion".
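For example, a fully specified model might be given as follows (the service and model names are illustrative, not a recommendation):

```wolfram
LLMSynthesize["Summarize the plot of Hamlet.",
 LLMEvaluator -> <|"Model" -> <|"Service" -> "OpenAI", "Name" -> "gpt-4o-mini", "Task" -> "Chat"|>|>]
```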
- The generated text is sampled from a distribution. Details of the sampling can be specified using the following properties of the LLMEvaluator:
  "Temperature"→t             Automatic   sample using a positive temperature t
  "TopProbabilities"→k        Automatic   sample only among the k highest-probability classes
  "TotalProbabilityCutoff"→p  Automatic   sample among the most probable choices with an accumulated probability of at least p (nucleus sampling)
- The Automatic value of these parameters uses the default for the specified "Model".
- Prompts specified in the "Prompts" property of the LLMEvaluator are prepended to the input prompt, with the role set to "System" if task is "Chat".
- Multiple prompts are separated by the "PromptDelimiter" property of the LLMEvaluator.
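As a sketch, a custom delimiter can be supplied through LLMEvaluator (the prompts are illustrative):

```wolfram
(* join the prompts with a newline instead of the default delimiter *)
LLMSynthesize[{"Translate to French:", "Good morning"},
 LLMEvaluator -> <|"PromptDelimiter" -> "\n"|>]
```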
- Possible values for Authentication are:
  Automatic         choose the authentication scheme automatically
  Environment       check for a key in the environment variables
  SystemCredential  check for a key in the system keychain
  ServiceObject[…]  inherit the authentication from a service object
  assoc             provide an explicit key and user ID
- With Authentication→Automatic, the function checks for the variable ToUpperCase[service]<>"_API_KEY" in Environment and SystemCredential; otherwise, it uses ServiceConnect[service].
- When using Authentication→assoc, assoc can contain the following keys:
  "ID"      user identity
  "APIKey"  API key used to authenticate
- LLMSynthesize uses machine learning. Its methods, training sets and biases included therein may change and yield varied results in different versions of the Wolfram Language.
Examples
Basic Examples (3)
Scope (3)
Options (8)
Authentication (4)
LLMEvaluator (4)
By default, the text generation continues until a termination token is generated:
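A sketch of such a call (the prompt is illustrative and the output will vary between evaluations):

```wolfram
LLMSynthesize["Complete this sentence: The sky is"]
```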
Limit the number of generated samples (tokens):
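For example (the limit and prompt are illustrative):

```wolfram
(* stop after at most 10 generated tokens *)
LLMSynthesize["Tell me a story about a dragon.",
 LLMEvaluator -> <|"MaxTokens" -> 10|>]
```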
Specify that the sampling should be performed at zero temperature:
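A sketch of a zero-temperature call, which makes the sampling (close to) deterministic:

```wolfram
LLMSynthesize["Name a primary color.", LLMEvaluator -> <|"Temperature" -> 0|>]
```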
Specify a high temperature to get more variation in the generation:
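For example (the temperature value is illustrative):

```wolfram
(* a higher temperature flattens the sampling distribution, increasing variation *)
LLMSynthesize["Name a primary color.", LLMEvaluator -> <|"Temperature" -> 1.5|>]
```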
Specify the maximum cumulative probability before cutting off the distribution:
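A sketch using nucleus sampling (the cutoff value is illustrative):

```wolfram
(* sample among the most probable tokens whose accumulated probability is at least 0.2 *)
LLMSynthesize["Name a primary color.",
 LLMEvaluator -> <|"TotalProbabilityCutoff" -> 0.2|>]
```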
Specify the service and the model to use for the generation:
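For example (the service and model names are illustrative):

```wolfram
LLMSynthesize["Write a limerick about rivers.",
 LLMEvaluator -> <|"Model" -> {"Anthropic", "claude-3-5-haiku-latest"}|>]
```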
Applications (1)
Wolfram Research (2023), LLMSynthesize, Wolfram Language function, https://reference.wolfram.com/language/ref/LLMSynthesize.html (updated 2025).