LLMConfiguration[assoc] represents a configuration for an LLM.


LLMConfiguration["prop" -> val] creates a configuration based on $LLMEvaluator with the specified property set to val.


LLMConfiguration[{prop1 -> val1, prop2 -> val2, …}] specifies several properties and values.


LLMConfiguration[config, spec] creates a configuration based on an existing configuration.


  • LLMConfiguration objects can be used with functions such as LLMSynthesize, ChatObject, and ChatEvaluate through the LLMEvaluator option.
  • The default configuration, $LLMEvaluator, is set to an LLMConfiguration object.
  • Supported properties of LLMConfiguration objects include:
  • "MaxTokens"    maximum number of tokens to generate
    "Model"    base model
    "PromptDelimiter"    string to insert between prompts
    "Prompts"    initial prompts
    "StopTokens"    tokens on which to stop generation
    "Temperature"    sampling temperature
    "ToolMethod"    method to use for tool calling
    "Tools"    list of LLMTool objects to make available
    "TopProbabilities"    sampling classes cutoff
    "TotalProbabilityCutoff"    sampling probability cutoff (nucleus sampling)
  • Valid settings for "Model" include:
  • name    named model
    {service, name}    named model from service
    <|"Service" -> service, "Name" -> name|>    fully specified model
  • Text generated by an LLM is sampled from a distribution. Details of the sampling can be specified using the following properties of the LLMConfiguration:
  • "Temperature" -> t    sample using a positive temperature t
    "TotalProbabilityCutoff" -> p    sample among the most probable choices with an accumulated probability of at least p (nucleus sampling)
  • The setting for "PromptDelimiter" determines how multiple prompts are joined.
  • Valid settings for "ToolMethod" include:
  • "OpenAI"    OpenAI's function mechanism
    "Textual"    generic textual tool calling
    assoc    specific textual prompting and parsing
  • Valid keys in assoc include:
  • "ToolPrompt"    prompt specifying the tool format
    "ToolRequestParser"    function for parsing tool requests
    "ToolResponseInsertionFunction"    function for serializing tool responses
  • The prompt specified by "ToolPrompt" is only used if at least one tool is specified.
  • "ToolPrompt" can be a template, and is applied to an association containing all properties of the LLMConfiguration.
  • "ToolRequestParser" specifies a function that takes the most recent completion from the LLM and returns one of the following forms:
  • None    no tool request
    {{start, end}, LLMToolRequest[…]}    tool request
    {{start, end}, Failure[…]}    invalid tool request
  • The pair of integers {start,end} indicates the character range within the completion string where the tool request appears.
  • Not all LLM services support every parameter that can be specified in the LLMConfiguration.
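The tool-calling settings above can be combined as in the following sketch; the tool myTool, the parser parseToolCall, and the prompt text are hypothetical placeholders, not built-in defaults:

```wolfram
(* Sketch of a custom textual tool method; myTool and parseToolCall
   are assumed to be defined elsewhere. *)
LLMConfiguration[<|
  "Tools" -> {myTool},    (* myTool: an LLMTool object *)
  "ToolMethod" -> <|
    "ToolPrompt" -> "To use a tool, write: CALL[name, input]",
    (* parseToolCall returns None, {{start, end}, LLMToolRequest[…]},
       or {{start, end}, Failure[…]} for the most recent completion *)
    "ToolRequestParser" -> parseToolCall,
    (* serialize tool output back into the conversation *)
    "ToolResponseInsertionFunction" -> ToString
  |>
|>]
```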


Basic Examples  (3)

Create a configuration that includes a prompt:
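For instance (a minimal sketch; the prompt text is illustrative):

```wolfram
config = LLMConfiguration["Prompts" -> {"You are a terse assistant. Answer in one sentence."}]
```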

Use the configuration in an LLM evaluation:
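A sketch of such an evaluation, assuming a configuration named config and a working connection to an LLM service:

```wolfram
LLMSynthesize["What is the capital of France?", LLMEvaluator -> config]
```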

Specify multiple properties of a configuration:
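For example (the service/model pair and parameter values are illustrative):

```wolfram
LLMConfiguration[{
  "Model" -> {"OpenAI", "gpt-4"},    (* illustrative {service, name} pair *)
  "Temperature" -> 0.7,
  "MaxTokens" -> 256
}]
```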

Modify an existing configuration:
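A sketch, assuming an existing configuration config; the new setting overrides the corresponding property of the original:

```wolfram
LLMConfiguration[config, "Temperature" -> 0]
```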

Wolfram Research (2023), LLMConfiguration, Wolfram Language function, https://reference.wolfram.com/language/ref/LLMConfiguration.html.


Wolfram Language. 2023. "LLMConfiguration." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/LLMConfiguration.html.


Wolfram Language. (2023). LLMConfiguration. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/LLMConfiguration.html


@misc{reference.wolfram_2024_llmconfiguration, author="Wolfram Research", title="{LLMConfiguration}", year="2023", howpublished="\url{https://reference.wolfram.com/language/ref/LLMConfiguration.html}", note="Accessed: 18-July-2024"}


@online{reference.wolfram_2024_llmconfiguration, organization={Wolfram Research}, title={LLMConfiguration}, year={2023}, url={https://reference.wolfram.com/language/ref/LLMConfiguration.html}, note={Accessed: 18-July-2024}}