ChatEvaluate

This functionality requires an external account.

ChatEvaluate[chat,prompt]

appends prompt and its follow-up to the ChatObject chat.

ChatEvaluate[prompt]

represents an operator form of ChatEvaluate that can be applied to a ChatObject.

Details and Options

  • ChatEvaluate is used to continue the conversation in a ChatObject.
  • ChatEvaluate requires external service authentication, billing and internet connectivity.
  • Possible values for prompt include:
  • "text"    static text
    LLMPrompt["name"]    a repository prompt
    StringTemplate[…]    templated text
    TemplateObject[…]    template for creating a prompt
    Image[…]    an image
    {prompt1, …}    a list of prompts
  • Prompts created with TemplateObject can contain text and images. Not every LLM supports image input.
  • The following options can be specified:
  • Authentication    Inherited    explicit user ID and API key
    LLMEvaluator    Inherited    LLM configuration to use
    ProgressReporting    $ProgressReporting    how to report the progress of the computation
  • If LLMEvaluator is set to Inherited, the LLM configuration specified in chat is used.
  • LLMEvaluator can be set to an LLMConfiguration object or an association with any of the following keys:
  • "MaxTokens"    maximum number of tokens to generate
    "Model"    base model
    "PromptDelimiter"    string to insert between prompts
    "Prompts"    initial prompts or LLMPromptGenerator objects
    "StopTokens"    tokens on which to stop generation
    "Temperature"    sampling temperature
    "ToolMethod"    method to use for tool calling
    "Tools"    list of LLMTool objects to make available
    "TopProbabilities"    sampling classes cutoff
    "TotalProbabilityCutoff"    sampling probability cutoff (nucleus sampling)
  • Valid forms of "Model" include:
  • name    named model
    {service, name}    named model from service
    <|"Service" → service, "Name" → name|>    fully specified model
  • Prompts specified in "Prompts" are prepended to the messages in chat with the role "System".
  • Multiple prompts are separated by the "PromptDelimiter" property.
  • The generated text is sampled from a distribution. Details of the sampling can be specified using the following properties of the LLMEvaluator:
  • "Temperature" → t    Automatic    sample using a positive temperature t
    "TopProbabilities" → k    Automatic    sample only among the k highest-probability classes
    "TotalProbabilityCutoff" → p    Automatic    sample among the most probable choices with an accumulated probability of at least p (nucleus sampling)
  • The Automatic value of these parameters uses the default for the specified "Model".
  • Possible values for "ToolMethod" include:
  • "Service"    rely on the tool mechanism of the service
    "Textual"    use prompt-based tool calling
  • Possible values for Authentication are:
  • Automatic    choose the authentication scheme automatically
    Inherited    inherit settings from chat
    Environment    check for a key in the environment variables
    SystemCredential    check for a key in the system keychain
    ServiceObject[…]    inherit the authentication from a service object
    assoc    provide explicit key and user ID
  • With Authentication → Automatic, the function checks the variable ToUpperCase[service]<>"_API_KEY" in Environment and SystemCredential; otherwise, it uses ServiceConnect[service].
  • When using Authentication → assoc, assoc can contain the following keys:
  • "ID"    user identity
    "APIKey"    API key used to authenticate
  • ChatEvaluate uses machine learning. Its methods, training sets and biases included therein may change and yield varied results in different versions of the Wolfram Language.
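
The evaluator settings above can be passed inline as an association. A minimal sketch, assuming an existing ChatObject chat (the service, model name and option values are illustrative placeholders):

```
(* sketch: service/model and numeric settings are placeholders *)
config = <|
   "Model" -> {"OpenAI", "gpt-4o"},
   "Temperature" -> 0.7,
   "MaxTokens" -> 256
   |>;
ChatEvaluate[chat, "Summarize our discussion so far.", LLMEvaluator -> config]
```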

Examples


Basic Examples  (3)

Create a new chat:
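The stripped-out input might look like the following; ChatObject[] starts an empty conversation using the default LLM settings:

```
chat = ChatObject[]
```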

Add a message and a response to the conversation:
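A sketch of the corresponding input; ChatEvaluate returns an updated ChatObject containing both the prompt and the model's reply:

```
chat = ChatEvaluate[chat, "What is the capital of France?"]
```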

Create chat specifying a multimodal model:

Now both text and images can be used in the conversation:
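A sketch of such an exchange, assuming a vision-capable model (the model name is a placeholder; ExampleData supplies a sample image):

```
(* placeholder service/model; any multimodal model would do *)
chat = ChatObject[LLMEvaluator -> <|"Model" -> {"OpenAI", "gpt-4o"}|>];
img = ExampleData[{"TestImage", "House"}];
chat = ChatEvaluate[chat, {"Describe this picture:", img}]
```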

Create a chat object with a tool:

Show the LLM answer together with the tool-calling steps:
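One way this pair of steps might look; the tool name, description and the "Messages" property access are assumptions for illustration:

```
(* a zero-argument tool the LLM can call *)
timeTool = LLMTool[{"CurrentTime", "returns the current date and time"}, {}, DateString[] &];
chat = ChatObject[LLMEvaluator -> <|"Tools" -> {timeTool}|>];
chat = ChatEvaluate[chat, "What time is it right now?"];
chat["Messages"]  (* includes the intermediate tool-call messages *)
```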

Scope  (3)

Start a new conversation:

Continue an existing conversation:

Use the function as an operator:
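In the operator form, the prompt is curried and the resulting operator is applied to a ChatObject; a sketch:

```
ChatEvaluate["Translate that into French."][chat]
```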

Options  (10)

Authentication  (4)

Provide an authentication key for the API:
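A sketch of explicit authentication; the key string is a placeholder, never a real credential:

```
ChatEvaluate[chat, "Hello!", Authentication -> <|"APIKey" -> "sk-..."|>]  (* placeholder key *)
```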

Look for the key in the system keychain:

Specify the name of the key:

Look for the key in the system environment:
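These lookup variants might be written as follows; the named SystemCredential form and the credential name are assumptions for illustration:

```
(* key stored in the system keychain *)
ChatEvaluate[chat, "Hello!", Authentication -> SystemCredential]
(* assumed: look up a specific keychain entry by name *)
ChatEvaluate[chat, "Hello!", Authentication -> SystemCredential["MY_OPENAI_KEY"]]
(* key stored in an environment variable *)
ChatEvaluate[chat, "Hello!", Authentication -> Environment]
```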

Authenticate via a service object:

LLMEvaluator  (6)

Specify the service used to generate the answer:

Specify both the service and the model:

By default, the text generation continues until a termination token is generated:

Limit the number of generated samples (tokens):
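A sketch using the "MaxTokens" property of the LLMEvaluator; the limit is an arbitrary illustrative value:

```
ChatEvaluate[chat, "Tell me a long story.", LLMEvaluator -> <|"MaxTokens" -> 30|>]
```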

Specify that the sampling should be performed at zero temperature:

Specify a high temperature to get more variation in the generation:
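The two temperature settings might look like this; the prompt and the high value 1.5 are illustrative:

```
(* deterministic: always pick the most likely token *)
ChatEvaluate[chat, "Suggest a name for a cat.", LLMEvaluator -> <|"Temperature" -> 0|>]
(* high temperature: more varied, less predictable output *)
ChatEvaluate[chat, "Suggest a name for a cat.", LLMEvaluator -> <|"Temperature" -> 1.5|>]
```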

Specify the maximum cumulative probability before cutting off the distribution:

Specify the service and the model to use for the generation:

Specify a prompt to be automatically inserted:
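A sketch of the "Prompts" property, which prepends a system-role message to the conversation (the prompt text is illustrative):

```
ChatEvaluate[chat, "How are you today?",
 LLMEvaluator -> <|"Prompts" -> {"Always answer in rhyming verse."}|>]
```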

Applications  (1)

Tool Calling  (1)

Define a tool that can be called by the LLM:

Instantiate a chat object with the tool:

Ask a question that can get a precise answer using the tool:
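The three steps above can be sketched as follows; the tool name, description and parameter handling are assumptions for illustration:

```
(* a one-parameter tool; the function receives the argument by name *)
squareTool = LLMTool[{"square", "squares an integer"}, {"x"}, #x^2 &];
chat = ChatObject[LLMEvaluator -> <|"Tools" -> {squareTool}|>];
ChatEvaluate[chat, "What is 1234 squared?"]
```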

Possible Issues  (1)

Evaluating a chat session with a specific service embeds the authentication information:

With the default setting Authentication → Inherited, the authentication will not work on a different service:

Use Authentication → Automatic or provide explicit authentication for the new service to reconnect:
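A sketch of the reconnection; the service and model names are placeholders:

```
(* switch services; Automatic re-resolves the key for the new service *)
ChatEvaluate[chat, "Continue the conversation.",
 LLMEvaluator -> <|"Model" -> {"Anthropic", "claude-3-5-sonnet-latest"}|>,
 Authentication -> Automatic]
```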

Text

Wolfram Research (2023), ChatEvaluate, Wolfram Language function, https://reference.wolfram.com/language/ref/ChatEvaluate.html.

CMS

Wolfram Language. 2023. "ChatEvaluate." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/ChatEvaluate.html.

APA

Wolfram Language. (2023). ChatEvaluate. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/ChatEvaluate.html

BibTeX

@misc{reference.wolfram_2024_chatevaluate, author="Wolfram Research", title="{ChatEvaluate}", year="2023", howpublished="\url{https://reference.wolfram.com/language/ref/ChatEvaluate.html}", note={Accessed: 21-November-2024}}

BibLaTeX

@online{reference.wolfram_2024_chatevaluate, organization={Wolfram Research}, title={ChatEvaluate}, year={2023}, url={https://reference.wolfram.com/language/ref/ChatEvaluate.html}, note={Accessed: 21-November-2024}}