"MistralAI" (Service Connection)

This service connection requires an external account.

Use the Mistral AI API with the Wolfram Language.

Connecting & Authenticating

ServiceConnect["MistralAI"] creates a connection to the Mistral AI API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
Use of this connection requires internet access and a Mistral AI account.
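A minimal sketch of both connection paths (the variable name mistral is illustrative; "New" is the general ServiceConnect argument for forcing a fresh authentication):

  (* reuse a saved connection if one exists, otherwise launch authentication *)
  mistral = ServiceConnect["MistralAI"]

  (* ignore any saved connection and authenticate again *)
  ServiceConnect["MistralAI", "New"]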

Requests

ServiceExecute["MistralAI","request",params] sends a request to the Mistral AI API using parameters params. The following gives possible requests.
Request:

"TestConnection" returns Success for working connection, Failure otherwise

Text

Request:

"Chat" create a response for the given chat conversation

Parameters:
  • "Messages"(required)a list of messages in the conversation
    "MaxTokens"Automaticmaximum number of tokens to generate
    "Model"Automaticname of the model to use
    "Stream"Falsereturn the result as server-sent events
    "Temperature"Automaticsampling temperature (between 0 and 1)
    "ToolChoice"Automaticwhich (if any) tool is called by the model
    "Tools"Automaticone or more LLMTool objects available to the model
    "TopProbabilities"Automaticsample only among the k highest-probability classes
    "TotalProbabilityCutoff"Nonesample among the most probable classes with an accumulated probability of at least p (nucleus sampling)
Request:

"Embedding"    create an embedding vector representing the input text

Parameters:

"Input"    (required)    one or a list of texts to get embeddings for
"Model"    Automatic     name of the model to use
Model Lists

Request:

"ChatModelList"    list models available for the "Chat" request

Request:

"EmbeddingModelList"    list models available for the "Embedding" request

Examples

Basic Examples  (1)

Create a new connection:
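
A minimal sketch (the first call opens the authentication dialog; the variable name mistral is illustrative):

  mistral = ServiceConnect["MistralAI"]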

Generate a response from a chat:
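
A sketch, assuming messages are given as associations with "role" and "content" keys as in the Mistral AI API:

  ServiceExecute["MistralAI", "Chat",
    {"Messages" -> {<|"role" -> "user", "content" -> "What is the capital of France?"|>}}]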

Compute the embedding for a sentence:
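
A sketch; with no "Model" given, the default embedding model is assumed:

  ServiceExecute["MistralAI", "Embedding",
    {"Input" -> "The quick brown fox jumps over the lazy dog."}]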

Scope  (4)

Connection  (1)

Test the connection:
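
A sketch of the "TestConnection" request; it is expected to return a Success object when the connection works:

  ServiceExecute["MistralAI", "TestConnection"]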

Text  (3)

Chat  (1)

Respond to a chat containing multiple messages:
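
A sketch with a system message followed by a user message; the "role"/"content" key names mirror the Mistral AI API and are an assumption about the format the connection expects:

  ServiceExecute["MistralAI", "Chat", {"Messages" -> {
    <|"role" -> "system", "content" -> "You are a concise assistant."|>,
    <|"role" -> "user", "content" -> "Explain embeddings in one sentence."|>
  }}]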

Allow the model to use an LLMTool:
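
A sketch that passes an LLMTool through the "Tools" parameter; the tool definition below (name, empty parameter list, function) is illustrative, and the exact LLMTool constructor form should be checked against its documentation:

  (* illustrative tool that returns the current date as a string *)
  dateTool = LLMTool["CurrentDate", {}, DateString[] &];

  ServiceExecute["MistralAI", "Chat", {
    "Messages" -> {<|"role" -> "user", "content" -> "What is today's date?"|>},
    "Tools" -> {dateTool}
  }]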

ChatModelList  (1)

Look up the list of available chat models:
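
A sketch; the result is expected to be a list of chat model names available to the connected account:

  ServiceExecute["MistralAI", "ChatModelList"]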

EmbeddingModelList  (1)

Look up the list of available embedding models:
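
A sketch; the result is expected to be a list of embedding model names:

  ServiceExecute["MistralAI", "EmbeddingModelList"]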