---
title: "LLMFunction"
language: "en"
type: "Symbol"
summary: "LLMFunction[prompt] represents a template for a large language model (LLM) prompt. LLMFunction[{prompt1, prompt2, ...}] represents a combination of multiple prompts. LLMFunction[prompt, form] includes the interpreter form to apply to the response. LLMFunction[...][params] gives the LLM service response for prompt applied to parameters params."
keywords: 
- Autocomplete
- Computed responses
- Context
- Contextual data handling
- Data-based prompt responses
- Data-driven text generation
- Deep learning
- Dynamic content creation
- External tools
- GPT
- Language model
- Language models
- Language processing
- LLM
- Machine learning
- Machine translation
- Natural Language Processing (NLP)
- Natural language understanding
- Neural network
- Prediction
- Prompt
- Real-time data integration
- Semantic processing
- Sentence generation
- Sequence generation
- Text analysis
- Text generation
- Text prediction
- Text synthesis
- Tool interfacing
canonical_url: "https://reference.wolfram.com/language/ref/LLMFunction.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Machine Learning"
    link: "https://reference.wolfram.com/language/guide/MachineLearning.en.md"
  - 
    title: "Text Manipulation"
    link: "https://reference.wolfram.com/language/guide/ProcessingTextualData.en.md"
  - 
    title: "LLM-Related Functionality"
    link: "https://reference.wolfram.com/language/guide/LLMFunctions.en.md"
  - 
    title: "Text Analysis"
    link: "https://reference.wolfram.com/language/guide/TextAnalysis.en.md"
  - 
    title: "Natural Language Processing"
    link: "https://reference.wolfram.com/language/guide/NaturalLanguageProcessing.en.md"
  - 
    title: "String Manipulation"
    link: "https://reference.wolfram.com/language/guide/StringManipulation.en.md"
  - 
    title: "Working with Templates"
    link: "https://reference.wolfram.com/language/guide/WorkingWithTemplates.en.md"
  - 
    title: "Converting between Expressions & Strings"
    link: "https://reference.wolfram.com/language/guide/ConvertingBetweenExpressionsAndStrings.en.md"
  - 
    title: "Linguistic Data"
    link: "https://reference.wolfram.com/language/guide/LinguisticData.en.md"
  - 
    title: "Free-Form & External Input"
    link: "https://reference.wolfram.com/language/guide/FreeFormAndExternalInput.en.md"
  - 
    title: "Setting Up Input Interpreters"
    link: "https://reference.wolfram.com/language/guide/InterpretingStrings.en.md"
  - 
    title: "Knowledge Representation & Access"
    link: "https://reference.wolfram.com/language/guide/KnowledgeRepresentationAndAccess.en.md"
  - 
    title: "Text Generation"
    link: "https://reference.wolfram.com/language/guide/TextConstruction.en.md"
  - 
    title: "Text Normalization"
    link: "https://reference.wolfram.com/language/guide/TextNormalization.en.md"
---
[EXPERIMENTAL]

# LLMFunction

This functionality requires [LLM access](https://www.wolfram.com/notebook-assistant-llm-kit/)

LLMFunction[prompt] represents a template for a large language model (LLM) prompt.

LLMFunction[{prompt1, prompt2, …}] represents a combination of multiple prompts.

LLMFunction[prompt, form] includes the interpreter form to apply to the response.

LLMFunction[…][params] gives the LLM service response for prompt applied to parameters params.

## Details and Options

* An ``LLMFunction`` can be used to generate text using a large language model (LLM): it can create content, complete sentences, extract information, and more.

* ``LLMFunction`` requires external service authentication, billing and internet connectivity.

* The ``prompti`` can be any of the following:

|                   |                                |
| ----------------- | ------------------------------ |
| "string"          | static string template         |
| LLMPrompt["name"] | a repository prompt            |
| StringTemplate[…] | templated text                 |
| TemplateObject[…] | template for creating a prompt |
| {prompt1, …}      | a list of prompts              |

* A prompt ``"string"`` is equivalent to ``StringTemplate["string"]``.
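
For example, the following two definitions specify the same templated prompt (a sketch; evaluating either requires LLM service access):

```wl
(* a string prompt containing a `1` slot behaves like the corresponding StringTemplate *)
f = LLMFunction["Translate `1` into French"];
g = LLMFunction[StringTemplate["Translate `1` into French"]];
```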

* Prompts created with ``TemplateObject`` can contain text and images. Not every LLM supports image input.

* The supported values of ``form`` are the same as those of ``Interpreter``.

* When a ``form`` is specified, the LLM will be instructed to respect it.

* Specifying an output interpretation adds the corresponding schema information to the LLM prompt.
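
For instance, passing an interpreter specification such as ``"Number"`` both instructs the LLM to answer numerically and parses the response into an expression (a sketch; the function name is illustrative):

```wl
(* the "Number" form constrains and interprets the response as a number *)
countLegs = LLMFunction[
  "How many legs does a `1` have? Answer with a number only.", "Number"];
```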

* ``LLMFunction`` supports the following options:

|                   |                |                                                          |
| ----------------- | -------------- | -------------------------------------------------------- |
| InsertionFunction | TextString     | function or format to apply before inserting expressions |
| CombinerFunction  | StringJoin     | function to apply to combine pieces within a prompt      |
| Authentication    | Automatic      | explicit user ID and API key                             |
| LLMEvaluator      | \$LLMEvaluator | LLM configuration to use                                 |
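
For example, a specific model and sampling temperature can be requested through the ``LLMEvaluator`` option (a sketch; model availability depends on your service access):

```wl
(* request a particular service model with a low sampling temperature *)
summarize = LLMFunction["Summarize in one sentence: `1`",
  LLMEvaluator -> <|"Model" -> {"OpenAI", "gpt-4o-mini"}, "Temperature" -> 0.2|>];
```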

* ``LLMEvaluator`` can be set to an ``LLMConfiguration`` object or an association with any of the following keys:

|                          |                                                |
| ------------------------ | ---------------------------------------------- |
| "MaxTokens"              | maximum number of tokens to generate           |
| "Model"                  | base model                                     |
| "PromptDelimiter"        | string to insert between prompts               |
| "Prompts"                | initial prompts                                |
| "StopTokens"             | tokens on which to stop generation             |
| "Temperature"            | sampling temperature                           |
| "ToolMethod"             | method to use for tool calling                 |
| "Tools"                  | list of LLMTool objects to make available      |
| "TopProbabilities"       | sampling classes cutoff                        |
| "TotalProbabilityCutoff" | sampling probability cutoff (nucleus sampling) |

* Valid forms of ``"Model"`` include:

|                                                         |                          |
| ------------------------------------------------------- | ------------------------ |
| name                                                    | named model              |
| {service, name}                                         | named model from service |
| <\|"Service" -> service, "Name" -> name, "Task" -> task\|> | fully specified model    |
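
For example, these are equivalent ways of naming the same model in an ``LLMEvaluator`` setting (a sketch; the model name is illustrative):

```wl
LLMFunction["Hello `1`", LLMEvaluator -> <|"Model" -> "gpt-4o-mini"|>]            (* named model *)
LLMFunction["Hello `1`", LLMEvaluator -> <|"Model" -> {"OpenAI", "gpt-4o-mini"}|>] (* named model from service *)
```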

* The generated text is sampled from a distribution. Details of the sampling can be specified using the following properties of ``LLMEvaluator``:

|     |     |     |
| --- | --- | --- |
| "Temperature" -> t | Automatic | sample using a positive temperature t |
| "TopProbabilities" -> k | Automatic | sample only among the k highest-probability classes |
| "TotalProbabilityCutoff" -> p | Automatic | sample among the most probable choices with an accumulated probability of at least p (nucleus sampling) |

* Multiple prompts are separated by the ``"PromptDelimiter"`` property of the ``LLMEvaluator``.
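
For example, a list of prompts can be joined with a custom delimiter (a sketch; the delimiter string is illustrative):

```wl
(* the two prompts are joined with a blank line before being sent to the LLM *)
define = LLMFunction[{"Answer as tersely as possible.", "Define `1`."},
  LLMEvaluator -> <|"PromptDelimiter" -> "\n\n"|>];
```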

* Possible values for ``Authentication`` are:

|                  |                                                  |
| ---------------- | ------------------------------------------------ |
| Automatic        | choose the authentication scheme automatically   |
| Environment      | check for a key in the environment variables     |
| SystemCredential | check for a key in the system keychain           |
| ServiceObject[…] | inherit the authentication from a service object |
| assoc            | provide an explicit key and user ID              |

* With ``Authentication -> Automatic``, the function checks the variable ``ToUpperCase[service] <> "_API_KEY"`` in ``Environment`` and ``SystemCredential``; otherwise, it uses ``ServiceConnect[service]``.
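
For example, a key stored in the system keychain under the conventional name will be found automatically (a sketch; replace the placeholder with a real key):

```wl
(* store the key once; Automatic authentication then picks it up *)
SystemCredential["OPENAI_API_KEY"] = "<your-api-key>";
f = LLMFunction["Hello `1`", Authentication -> Automatic];
```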

* When using ``Authentication -> assoc``, ``assoc`` can contain the following keys:

|          |                              |
| -------- | ---------------------------- |
| "ID"     | user identity                |
| "APIKey" | API key used to authenticate |
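
For example, explicit credentials can be supplied directly (a sketch with a placeholder key):

```wl
f = LLMFunction["Hello `1`", Authentication -> <|"APIKey" -> "<your-api-key>"|>];
```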

* ``LLMFunction`` uses machine learning. Its methods, training sets and biases included therein may change and yield varied results in different versions of the Wolfram Language.

## Examples (8)

### Basic Examples (3)

Create a function for getting cooking instructions:

```wl
In[1]:= LLMFunction["Describe how to cook `1`"]

Out[1]=
LLMFunction[Association["LLMEvaluator" -> LLMConfiguration[
    Association["Model" -> Association["Service" -> "Anthropic", 
       "Name" -> "claude-3-5-haiku-20241022", "ServiceObject" -> ServiceObject["Anthropic", 
         Association["ID" ->  ... }, 
  "Template" -> TemplateObject[{"Describe how to cook ", TemplateSlot[1]}, 
    CombinerFunction -> StringJoin, InsertionFunction -> TextString, 
    MetaInformation -> Association[]], "Interpreter" -> "String", "LLMPacletVersion" -> "2.2.0"]]
```

---

Create a helper tool:

```wl
In[1]:= findbyproperty = LLMFunction["What is something that is `1` and `2`"]

Out[1]=
LLMFunction[Association["LLMEvaluator" -> LLMConfiguration[
    Association["Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4o-mini", 
       "ServiceObject" -> ServiceObject["OpenAI", Association[
          "ID" -> "yXqt3eiO8bodRCCF3z ... ct[{"What is something that is ", TemplateSlot[1], " and ", 
     TemplateSlot[2]}, CombinerFunction -> StringJoin, InsertionFunction -> TextString, 
    MetaInformation -> Association[]], "Interpreter" -> "String", "LLMPacletVersion" -> "2.2.0"]]
```

Use the function:

```wl
In[2]:= findbyproperty["Blue", "Round"]

Out[2]= "One example of something that is blue and round is a blue marble. Other examples could include a blue balloon, a blue beach ball, or a blue planet like Earth when viewed from space."
```

---

Create a function that returns a city as an ``Entity``:

```wl
In[1]:= cityfinder = LLMFunction["Find a city between `1` and `2`. Answer with the city name only", "City"]

Out[1]=
LLMFunction[Association["LLMEvaluator" -> LLMConfiguration[
    Association["Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4o-mini", 
       "ServiceObject" -> ServiceObject["OpenAI", Association[
          "ID" -> "yXqt3eiO8bodRCCF3z ... plateSlot[1], " and ", TemplateSlot[2], 
     ". Answer with the city name only"}, CombinerFunction -> StringJoin, 
    InsertionFunction -> TextString, MetaInformation -> Association[]], "Interpreter" -> "City", 
  "LLMPacletVersion" -> "2.2.0"]]
```

Use the function:

```wl
In[2]:= cityfinder["St. Louis", "Chicago"]

Out[2]= Entity["City", {"Springfield", "Illinois", "UnitedStates"}]
```

### Scope (4)

Use named parameters:

```wl
In[1]:= teamfinder = LLMFunction["What team did `Pitcher` and `Catcher` play for?"]

Out[1]=
LLMFunction[Association["LLMEvaluator" -> LLMConfiguration[
    Association["Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4o-mini", 
       "ServiceObject" -> ServiceObject["OpenAI", Association[
          "ID" -> "yXqt3eiO8bodRCCF3z ...  TemplateSlot["Pitcher"], " and ", 
     TemplateSlot["Catcher"], " play for?"}, CombinerFunction -> StringJoin, 
    InsertionFunction -> TextString, MetaInformation -> Association[]], "Interpreter" -> "String", 
  "LLMPacletVersion" -> "2.2.0"]]
```

Apply it using an ``Association``:

```wl
In[2]:= teamfinder[<|"Pitcher" -> "Whitey Ford", "Catcher" -> "Yogi Berra"|>]

Out[2]= "Whitey Ford and Yogi Berra both played for the New York Yankees. They were key figures in the team's success during the 1950s and 1960s."
```

---

Set a default for a parameter:

```wl
In[1]:= recipe = LLMFunction[StringTemplate["Create a short recipe including `Veggie` and `Spice`", <|"Spice" -> "Chili Powder"|>]]

Out[1]=
LLMFunction[Association["LLMEvaluator" -> LLMConfiguration[
    Association["Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4o-mini", 
       "ServiceObject" -> ServiceObject["OpenAI", Association[
          "ID" -> "yXqt3eiO8bodRCCF3z ...      " and ", TemplateSlot["Spice"]}, Association["Spice" -> "Chili Powder"], 
    CombinerFunction -> StringJoin, InsertionFunction -> TextString, 
    MetaInformation -> Association[]], "Interpreter" -> "String", "LLMPacletVersion" -> "2.2.0"]]

In[2]:= recipe[<|"Veggie" -> "Corn"|>]

Out[2]=
"### Spicy Corn Salad

#### Ingredients:
- 2 cups fresh or frozen corn (thawed if frozen)
- 1 tablespoon olive oil
- 1 teaspoon chili powder
- 1/2 teaspoon salt
- 1/4 teaspoon black pepper
- 1/2 cup diced red bell pepper
- 1/4 cup chopped fresh cil ... ntil the corn and peppers are evenly coated with the dressing.

4. **Serve**: Let the salad sit for about 10 minutes to allow the flavors to meld. Serve chilled or at room temperature.

Enjoy your spicy corn salad as a side dish or a light snack!"
```

---

Use a multi-part prompt with images:

```wl
In[1]:= LLMFunction[TemplateObject@{"what is this?", TemplateSlot[1]}][[image]]

Out[1]= "This is a slice of lemon. Lemons are citrus fruits known for their bright yellow color and tart flavor. They are commonly used in cooking, baking, and beverages."
```

---

Use an ``Interpreter`` to both constrain and validate the result:

```wl
In[1]:= f = LLMFunction["Generate `` random instances", RepeatingElement["Color"]]

Out[1]=
LLMFunction[Association["LLMEvaluator" -> LLMConfiguration[
    Association["Model" -> Association["Service" -> "LLMKitAzureOpenAI", "Name" -> Automatic, 
       "Alias" -> "Wolfram LLM Service", "ServiceObject" -> ServiceObject["LLMKitAzureOpenAI" ... bject[{"Generate ", TemplateSlot[1], " random instances"}, 
    CombinerFunction -> StringJoin, InsertionFunction -> TextString, 
    MetaInformation -> Association[]], "Interpreter" -> RepeatingElement["Color"], 
  "LLMPacletVersion" -> "2.2.0"]]

In[2]:= f[10]

Out[2]= {RGBColor[1, 0, 0], RGBColor[0, 0, 1], RGBColor[0, 1, 0], RGBColor[1, 1, 0], RGBColor[0.5, 0, 0.5], RGBColor[1, 0.5, 0], RGBColor[1, 0.5, 0.5], RGBColor[0.6, 0.4, 0.2], RGBColor[0, 1, 1], RGBColor[1, 0, 1]}
```

### Properties & Relations (1)

``LLMFunction`` with no parameters sends the prompt directly to the LLM:

```wl
In[1]:= LLMFunction["Hello"][]

Out[1]= "Hello! How can I assist you today?"
```

This is equivalent to ``LLMSynthesize`` with zero temperature:

```wl
In[2]:= LLMSynthesize["Hello", LLMEvaluator -> <|"Temperature" -> 0|>]

Out[2]= "Hello! How can I assist you today?"
```

## See Also

* [`LLMExampleFunction`](https://reference.wolfram.com/language/ref/LLMExampleFunction.en.md)
* [`LLMSynthesize`](https://reference.wolfram.com/language/ref/LLMSynthesize.en.md)
* [`LLMGraph`](https://reference.wolfram.com/language/ref/LLMGraph.en.md)
* [`LLMPrompt`](https://reference.wolfram.com/language/ref/LLMPrompt.en.md)
* [`LLMTool`](https://reference.wolfram.com/language/ref/LLMTool.en.md)
* [`OpenAI`](https://reference.wolfram.com/language/ref/service/OpenAI.en.md)
* [`Anthropic`](https://reference.wolfram.com/language/ref/service/Anthropic.en.md)
* [`GoogleGemini`](https://reference.wolfram.com/language/ref/service/GoogleGemini.en.md)
* [`AlephAlpha`](https://reference.wolfram.com/language/ref/service/AlephAlpha.en.md)
* [`Cohere`](https://reference.wolfram.com/language/ref/service/Cohere.en.md)
* [`DeepSeek`](https://reference.wolfram.com/language/ref/service/DeepSeek.en.md)
* [`Groq`](https://reference.wolfram.com/language/ref/service/Groq.en.md)
* [`MistralAI`](https://reference.wolfram.com/language/ref/service/MistralAI.en.md)
* [`TogetherAI`](https://reference.wolfram.com/language/ref/service/TogetherAI.en.md)

## Related Guides

* [Machine Learning](https://reference.wolfram.com/language/guide/MachineLearning.en.md)
* [Text Manipulation](https://reference.wolfram.com/language/guide/ProcessingTextualData.en.md)
* [LLM-Related Functionality](https://reference.wolfram.com/language/guide/LLMFunctions.en.md)
* [Text Analysis](https://reference.wolfram.com/language/guide/TextAnalysis.en.md)
* [Natural Language Processing](https://reference.wolfram.com/language/guide/NaturalLanguageProcessing.en.md)
* [String Manipulation](https://reference.wolfram.com/language/guide/StringManipulation.en.md)
* [Working with Templates](https://reference.wolfram.com/language/guide/WorkingWithTemplates.en.md)
* [Converting between Expressions & Strings](https://reference.wolfram.com/language/guide/ConvertingBetweenExpressionsAndStrings.en.md)
* [Linguistic Data](https://reference.wolfram.com/language/guide/LinguisticData.en.md)
* [Free-Form & External Input](https://reference.wolfram.com/language/guide/FreeFormAndExternalInput.en.md)
* [Setting Up Input Interpreters](https://reference.wolfram.com/language/guide/InterpretingStrings.en.md)
* [Knowledge Representation & Access](https://reference.wolfram.com/language/guide/KnowledgeRepresentationAndAccess.en.md)
* [Text Generation](https://reference.wolfram.com/language/guide/TextConstruction.en.md)
* [Text Normalization](https://reference.wolfram.com/language/guide/TextNormalization.en.md)

## History

* [Introduced in 2023 (13.3)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn133.en.md) \| [Updated in 2025 (14.3)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn143.en.md)