LLMSynthesize[prompt] generates text that continues the input prompt.


LLMSynthesize[{prompt1, prompt2, …}] combines multiple prompts together.


LLMSynthesize[prompt, prop] returns the specified property of the generated text.

Details and Options

  • LLMSynthesize generates text according to the instruction specified in the prompt. It can create content, complete sentences, extract information and more.
  • Possible values for prompt can be:
  • "string"	static text
    LLMPrompt["name"]	a repository prompt
    StringTemplate[…]	templated text
    TemplateObject[…]	template for creating a text
    {prompt1, …}	a list of prompts
  • Template objects are automatically converted to strings via TemplateObject[…][].
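As an illustration of the prompt forms above, here are some sketch calls (the generated text varies between runs and models):

```wl
(* a plain string prompt *)
LLMSynthesize["Write a haiku about the sea."]

(* a list of prompts, joined using the evaluator's "PromptDelimiter" *)
LLMSynthesize[{"Translate to French:", "How are you today?"}]

(* a templated prompt; applying the template yields a string prompt *)
LLMSynthesize[StringTemplate["Define the word `1` in one sentence."]["ontology"]]
```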
  • Supported values for prop include:
  • "FullText"	full text of prompts and completion
    "FullTextAnnotations"	association annotating prompts and completion
    "CompletionText"	completion text
    "CompletionToolsText"	completion text including tool interactions
    "PromptText"	prompt text
    "ToolRequests"	association of LLMToolRequest objects
    "ToolResponses"	association of LLMToolResponse objects
    {prop1, prop2, …}	multiple properties
    All	all properties
  • "FullTextAnnotations", "ToolRequests" and "ToolResponses" give associations with elements in the format {start,end}→val, where val refers to an object, and start and end refer to the span of characters where val appears in the "FullText".
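For example, a single property or a list of properties can be requested in the second argument (a sketch; outputs vary):

```wl
(* return only the completion *)
LLMSynthesize["Name three prime numbers.", "CompletionText"]

(* return several properties at once, as an association *)
LLMSynthesize["Name three prime numbers.", {"PromptText", "CompletionText"}]
```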
  • The following options can be specified:
  • Authentication	Automatic	explicit user ID and API key
    MaxItems	Infinity	maximum number of tokens to generate
    LLMEvaluator	$LLMEvaluator	LLM configuration to use
  • LLMEvaluator can be set to an LLMConfiguration object or an association with any of the following keys:
  • "Model"	base model
    "Temperature"	sampling temperature
    "TotalProbabilityCutoff"	sampling probability cutoff (nucleus sampling)
    "PromptDelimiter"	delimiter to use between prompts
    "StopTokens"	tokens on which to stop generation
    "Tools"	list of LLMTool objects to use
    "ToolPrompt"	prompt for specifying tool format
    "ToolRequestParser"	function for parsing tool requests
    "ToolResponseString"	function for serializing tool responses
  • Valid forms of "Model" include:
  • name	named model
    {service, name}	named model from service
    <|"Service"→service, "Name"→name, "Task"→task|>	fully specified model
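For instance, a model can be named directly or qualified with its service (the model name below is illustrative; use whatever your service provides):

```wl
(* named model from an explicit service *)
LLMSynthesize["Say hello.", LLMEvaluator -> <|"Model" -> {"OpenAI", "gpt-4"}|>]
```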
  • The generated text is sampled from a distribution. Details of the sampling can be specified using the following properties of the LLMEvaluator:
  • "Temperature"→t	sample using a positive temperature t
    "TotalProbabilityCutoff"→p	sample among the most probable choices with an accumulated probability of at least p (nucleus sampling)
  • Multiple prompts are separated by the "PromptDelimiter" property of the LLMEvaluator.
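Sketches of both settings (sampling output is nondeterministic, so no fixed result is shown):

```wl
(* low temperature: near-deterministic continuation *)
LLMSynthesize["Continue: 1, 2, 3,", LLMEvaluator -> <|"Temperature" -> 0|>]

(* custom delimiter inserted between the two prompts *)
LLMSynthesize[{"apple", "banana"}, LLMEvaluator -> <|"PromptDelimiter" -> "\n---\n"|>]
```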
  • Possible values for Authentication are:
  • Automatic	choose the authentication scheme automatically
    Environment	check for a key in the environment variables
    SystemCredential	check for a key in the system keychain
    ServiceObject[…]	inherit the authentication from a service object
    assoc	provide explicit key and user ID
  • With Authentication→Automatic, the function checks for the variable "OPENAI_API_KEY" in Environment and SystemCredential; otherwise, it uses ServiceConnect["OpenAI"].
  • When using Authentication→assoc, assoc can contain the following keys:
  • "ID"	user identity
    "APIKey"	API key used to authenticate
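For example, explicit credentials can be supplied as an association (the key below is a placeholder, not a real credential):

```wl
(* "sk-..." is a placeholder API key *)
LLMSynthesize["Hello!", Authentication -> <|"APIKey" -> "sk-..."|>]
```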
  • LLMSynthesize uses machine learning. Its methods, training sets and biases included therein may change and yield varied results in different versions of the Wolfram Language.



Basic Examples  (3)

Synthesize text based on a simple description:
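A sketch of such a call (the generated text varies from run to run):

```wl
LLMSynthesize["Write a two-sentence description of the planet Mars."]
```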

Ask a question:
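For example:

```wl
LLMSynthesize["What is the tallest mountain on Earth?"]
```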

Return the full context of the LLM:
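A sketch using the "FullText" property to see the prompt together with the completion:

```wl
LLMSynthesize["What is the tallest mountain on Earth?", "FullText"]
```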

Scope  (2)

Synthesize text based on a prompt:
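For example (output varies):

```wl
LLMSynthesize["Write a limerick about databases."]
```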

Specify a property to return:
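For example:

```wl
LLMSynthesize["Count to five.", "CompletionText"]
```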

Options  (7)

Authentication  (4)

Provide an authentication key for the API:

Provide both a user ID and the API key:
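Sketches of both forms; the credentials shown are placeholders:

```wl
(* API key only *)
LLMSynthesize["Hi!", Authentication -> <|"APIKey" -> "sk-..."|>]

(* user ID together with the API key *)
LLMSynthesize["Hi!", Authentication -> <|"ID" -> "user@example.com", "APIKey" -> "sk-..."|>]
```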

Store the API key using the operating system's keychain:

Look for the key in the system keychain:
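A sketch of storing a placeholder key in the keychain and then authenticating from it:

```wl
(* store a placeholder key under the name LLMSynthesize looks for *)
SystemCredential["OPENAI_API_KEY"] = "sk-...";

(* look the key up in the system keychain *)
LLMSynthesize["Hi!", Authentication -> SystemCredential]
```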

Store the API key in an environment variable:

Look for the key in the system environment:
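A sketch using a placeholder key in an environment variable:

```wl
(* set a placeholder key in the environment *)
SetEnvironment["OPENAI_API_KEY" -> "sk-..."];

(* look the key up in the environment *)
LLMSynthesize["Hi!", Authentication -> Environment]
```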

Authenticate via a service object:
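For example, an existing service connection can supply the credentials:

```wl
service = ServiceConnect["OpenAI"];
LLMSynthesize["Hi!", Authentication -> service]
```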

MaxItems  (1)

By default, the text generation continues until a termination token is generated:

Limit the amount of generated samples (tokens):
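A sketch of limiting the generation length (the exact cutoff point in the text will vary):

```wl
(* stop after at most 20 tokens *)
LLMSynthesize["Tell me a story about a dragon.", MaxItems -> 20]
```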

LLMEvaluator  (2)

Specify that the sampling should be performed at zero temperature:

Specify a high temperature to get more variation in the generation:

Specify the maximum cumulative probability before cutting off the distribution:
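Sketches of the three sampling settings above (outputs are nondeterministic):

```wl
(* zero temperature: near-deterministic output *)
LLMSynthesize["Complete: roses are red,", LLMEvaluator -> <|"Temperature" -> 0|>]

(* high temperature: more variation between runs *)
LLMSynthesize["Complete: roses are red,", LLMEvaluator -> <|"Temperature" -> 1.5|>]

(* nucleus sampling: restrict to the top 90% of cumulative probability *)
LLMSynthesize["Complete: roses are red,", LLMEvaluator -> <|"TotalProbabilityCutoff" -> 0.9|>]
```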

Applications  (1)

Define a function that builds a prompt programmatically:

Use it to create a natural language synonym generator:

Apply it to a sequence of arguments:
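One way this might look; the helper names below are hypothetical:

```wl
(* hypothetical helper that builds a prompt string from its arguments *)
synonymPrompt[words___String] :=
  StringTemplate["Give one synonym for each of: `1`."][StringRiffle[{words}, ", "]]

(* natural language synonym generator built on LLMSynthesize *)
synonyms[words___String] := LLMSynthesize[synonymPrompt[words]]

(* apply it to a sequence of arguments *)
synonyms["happy", "fast", "large"]
```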

Possible Issues  (1)

The text generation is not guaranteed to follow instructions to the letter:

Use exact arithmetic for precise computations:
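For example, the model may answer a long multiplication incorrectly, whereas Wolfram Language arithmetic is exact:

```wl
(* the generated answer may be wrong *)
LLMSynthesize["What is 123456789 * 987654321?"]

(* exact computation *)
123456789 * 987654321  (* 121932631112635269 *)
```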

Wolfram Research (2023), LLMSynthesize, Wolfram Language function, https://reference.wolfram.com/language/ref/LLMSynthesize.html.
