"Cohere" (Service Connection)

This service connection requires an external account »

Use the Cohere API with the Wolfram Language.

Connecting & Authenticating

ServiceConnect["Cohere"] creates a connection to the Cohere API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
Use of this connection requires internet access and a Cohere account.

Requests

ServiceExecute["Cohere","request",params] sends a request to the Cohere API, using parameters params. The following gives possible requests.
Request:

"TestConnection"    returns Success for a working connection, Failure otherwise

Text

Request:

"Completion" create text completion for a given prompt

Parameters:

    "Prompt"    (required)    the prompt for which to generate completions
    "MaxTokens"    Automatic    maximum number of tokens to generate
    "FrequencyPenalty"    Automatic    penalize tokens based on their existing frequency in the text so far (between -2 and 2)
    "Model"    Automatic    name of the model to use
    "N"    Automatic    number of completions to return
    "PresencePenalty"    Automatic    penalize new tokens based on whether they appear in the text so far
    "StopTokens"    Automatic    strings where the API will stop generating further tokens
    "Stream"    False    return the result as server-sent events
    "Temperature"    Automatic    sampling temperature
    "TopProbabilities"    Automatic    sample only among the k highest-probability tokens
    "TotalProbabilityCutoff"    Automatic    an alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens comprising the requested probability mass

Request:

"Chat"    create a response for the given chat conversation

Parameters:

    "Messages"    (required)    a list of messages in the conversation, each given as an association with "Role" and "Content" keys
    "MaxTokens"    Automatic    maximum number of tokens to generate
    "Model"    Automatic    name of the model to use
    "StopTokens"    Automatic    strings where the API will stop generating further tokens
    "Stream"    False    return the result as server-sent events
    "Temperature"    Automatic    sampling temperature
    "Tools"    Automatic    one or more LLMTool objects available to the model
    "TopProbabilities"    Automatic    sample only among the k highest-probability tokens
    "TotalProbabilityCutoff"    Automatic    an alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens comprising the requested probability mass

Request:

"Embedding"    create an embedding vector representing the input text

Parameters:

    "Input"    (required)    one text or a list of texts to get embeddings for
    "Model"    Automatic    name of the model to use

Model Lists

Request:

"ChatModelList"    list models available for the "Chat" request

Request:

"EmbeddingModelList"    list models available for the "Embedding" request

Examples


Basic Examples  (1)

Create a new connection:
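A minimal sketch of this step; the returned ServiceObject can be passed to ServiceExecute in place of the connection name:

```wolfram
(* launches an authentication dialog the first time it is used *)
cohere = ServiceConnect["Cohere"]
```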

Complete a piece of text:
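A hedged sketch using the "Prompt" and "MaxTokens" parameters from the table above; no specific model is assumed, so the service default is used:

```wolfram
ServiceExecute["Cohere", "Completion",
 {"Prompt" -> "Once upon a time", "MaxTokens" -> 50}]
```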

Generate a response from a chat:
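A sketch of a single-turn chat; as described above, each message is an association with "Role" and "Content" keys:

```wolfram
ServiceExecute["Cohere", "Chat",
 {"Messages" -> {<|"Role" -> "user",
     "Content" -> "Write a haiku about recursion."|>}}]
```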

Compute the embedding for a sentence:
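A sketch assuming the default embedding model; the length of the returned vector depends on the model:

```wolfram
emb = ServiceExecute["Cohere", "Embedding",
   {"Input" -> "Hello, world!"}];
Dimensions[emb]
```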

Scope  (5)

Connection  (1)

Test the connection:
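For example, assuming the connection has been authorized; this returns Success if the stored credentials are valid:

```wolfram
ServiceExecute["Cohere", "TestConnection"]
```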

Text  (4)

Completion  (1)

Return multiple completions, limit the number of tokens in each and specify a stop token:
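A hedged sketch combining the "N", "MaxTokens" and "StopTokens" parameters documented above; the prompt text is purely illustrative:

```wolfram
ServiceExecute["Cohere", "Completion",
 {"Prompt" -> "List the planets in order: Mercury, Venus,",
  "N" -> 3, "MaxTokens" -> 20, "StopTokens" -> {"Jupiter"}}]
```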

Chat  (1)

Respond to a chat containing multiple messages:
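A sketch of a multi-turn conversation; the role names shown here are assumptions, and which roles a given model honors may vary:

```wolfram
ServiceExecute["Cohere", "Chat", {"Messages" -> {
   <|"Role" -> "user", "Content" -> "My name is Ada."|>,
   <|"Role" -> "assistant", "Content" -> "Nice to meet you, Ada!"|>,
   <|"Role" -> "user", "Content" -> "What is my name?"|>}}]
```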

Allow the model to use an LLMTool:
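A sketch with a hypothetical zero-argument tool passed via the "Tools" parameter; consult the LLMTool reference page for the exact constructor forms:

```wolfram
(* hypothetical tool that returns the current date *)
tool = LLMTool["current_date", {}, DateString[] &];
ServiceExecute["Cohere", "Chat",
 {"Messages" -> {<|"Role" -> "user",
     "Content" -> "What is today's date?"|>},
  "Tools" -> {tool}}]
```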

ChatModelList  (1)

Look up the list of available chat models:
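For example, using the request documented above; the result depends on your account:

```wolfram
ServiceExecute["Cohere", "ChatModelList"]
```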

EmbeddingModelList  (1)

Look up the list of available embedding models:
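Likewise for embedding models:

```wolfram
ServiceExecute["Cohere", "EmbeddingModelList"]
```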