LLMFunction
LLMFunction[prompt]
represents a template for a large language model (LLM) prompt.
LLMFunction[{prompt1,prompt2,…}]
represents a combination of multiple prompts.
LLMFunction[prompt,form]
includes the interpreter form to apply to the response.
LLMFunction[…][params]
gives the LLM service response for prompt applied to the parameters params.
Details and Options
- An LLMFunction can be used to generate text using a large language model (LLM). It can create content, complete sentences, extract information and more.
- LLMFunction requires external service authentication, billing and internet connectivity.
- Each prompti can be specified in the following forms:
    "string"            static string template
    LLMPrompt["name"]   a repository prompt
    StringTemplate[…]   templated text
    TemplateObject[…]   template for creating a prompt
    {prompt1,…}         a list of prompts
- A prompt "string" is equivalent to StringTemplate["string"].
- Prompts created with TemplateObject can contain text and images. Not every LLM supports image input.
- The supported values of form are the same as Interpreter.
- When a form is specified, the LLM will be instructed to respect it.
- Specifying an output interpretation adds the corresponding schema information to the LLM prompt.
- LLMFunction supports the following options:
    InsertionFunction   TextString      function or format to apply before inserting expressions
    CombinerFunction    StringJoin      function to apply to combine pieces within a prompt
    Authentication      Automatic       explicit user ID and API key
    LLMEvaluator        $LLMEvaluator   LLM configuration to use
- LLMEvaluator can be set to an LLMConfiguration object or an association with any of the following keys (a combined sketch appears after this list):
    "MaxTokens"                maximum number of tokens to generate
    "Model"                    base model
    "PromptDelimiter"          string to insert between prompts
    "Prompts"                  initial prompts
    "StopTokens"               tokens on which to stop generation
    "Temperature"              sampling temperature
    "ToolMethod"               method to use for tool calling
    "Tools"                    list of LLMTool objects to make available
    "TopProbabilities"         sampling classes cutoff
    "TotalProbabilityCutoff"   sampling probability cutoff (nucleus sampling)
- Valid forms of "Model" include:
    name                                               named model
    {service,name}                                     named model from service
    <|"Service"->service,"Name"->name,"Task"->task|>   fully specified model
- The generated text is sampled from a distribution. Details of the sampling can be specified using the following properties of LLMEvaluator:
    "Temperature"->t              Automatic   sample using a positive temperature t
    "TopProbabilities"->k         Automatic   sample only among the k highest-probability classes
    "TotalProbabilityCutoff"->p   Automatic   sample among the most probable choices with an accumulated probability of at least p (nucleus sampling)
- Multiple prompts are separated by the "PromptDelimiter" property of the LLMEvaluator.
- Possible values for Authentication are:
    Automatic          choose the authentication scheme automatically
    Environment        check for a key in the environment variables
    SystemCredential   check for a key in the system keychain
    ServiceObject[…]   inherit the authentication from a service object
    assoc              provide an explicit key and user ID
- With Authentication->Automatic, the function checks for the variable ToUpperCase[service]<>"_API_KEY" in Environment and SystemCredential; otherwise, it uses ServiceConnect[service].
- When using Authentication->assoc, assoc can contain the following keys:
    "ID"       user identity
    "APIKey"   API key used to authenticate
- LLMFunction uses machine learning. Its methods, training sets and biases included therein may change and yield varied results in different versions of the Wolfram Language.
Examples
Basic Examples (3)
Create a function for getting cooking instructions:
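A minimal sketch (the prompt wording is illustrative):

    cook = LLMFunction["How do I cook `1`? Give short numbered steps."];
    cook["a poached egg"]  (* returns the generated instructions as a string *)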
Create a function that returns a city as an Entity:
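For example, using the "City" interpreter form (the prompt wording is illustrative):

    city = LLMFunction["What is the largest city in `1`?", "City"];
    city["Australia"]  (* returns an Entity["City", ...] instead of plain text *)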
Scope (4)
Apply it using an Association:
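A sketch using a named template slot (slot name and prompt are illustrative):

    capital = LLMFunction["What is the capital of `country`?"];
    capital[<|"country" -> "Japan"|>]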
Set a default for a parameter:
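One way to do this, assuming TemplateSlot's DefaultValue option:

    boil = LLMFunction[TemplateObject[
      {"How long should I boil ", TemplateSlot["food", DefaultValue -> "an egg"], "?"}]];
    boil[]                       (* falls back to the default, "an egg" *)
    boil[<|"food" -> "pasta"|>]  (* overrides the default *)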
Use a multi-part prompt with images:
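A sketch using TemplateObject with an image slot (requires a model that accepts image input; the test image is illustrative):

    describe = LLMFunction[TemplateObject[
      {"Describe this picture in one sentence: ", TemplateSlot["img"]}]];
    describe[<|"img" -> ExampleData[{"TestImage", "House"}]|>]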
Use an Interpreter to both constrain and validate the result:
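A sketch assuming a Restricted interpreter specification, as for Interpreter generally:

    moons = LLMFunction[
      "How many moons does `1` have? Answer with a number only.",
      Restricted["Integer", {0, 100}]];
    moons["Mars"]  (* a non-integer or out-of-range reply yields a Failure object *)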
Properties & Relations (1)
LLMFunction with no parameters sends the prompt directly to the LLM:
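For instance (the prompt is illustrative):

    haiku = LLMFunction["Write a haiku about the sea."];
    haiku[]  (* the prompt has no slots, so it is sent verbatim *)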
This is equivalent to LLMSynthesize with zero temperature:
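A sketch of the equivalent call:

    LLMSynthesize["Write a haiku about the sea.",
      LLMEvaluator -> <|"Temperature" -> 0|>]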
See Also
LLMExampleFunction ▪ LLMSynthesize ▪ LLMGraph ▪ LLMPrompt ▪ LLMTool
Service Connections: OpenAI ▪ Anthropic ▪ GoogleGemini ▪ AlephAlpha ▪ Cohere ▪ DeepSeek ▪ Groq ▪ MistralAI ▪ TogetherAI
Related Guides
- Machine Learning
- Text Manipulation
- LLM-Related Functionality
- Text Analysis
- Natural Language Processing
- String Manipulation
- Working with Templates
- Converting between Expressions & Strings
- Linguistic Data
- Free-Form & External Input
- Setting Up Input Interpreters
- Knowledge Representation & Access
- Text Generation
- Text Normalization