"TogetherAI" (Service Connection)

This service connection requires an external account »

Use the Together AI API with the Wolfram Language.

Connecting & Authenticating

ServiceConnect["TogetherAI"] creates a connection to the Together AI API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
Use of this connection requires internet access and a Together AI account.

Requests

ServiceExecute["TogetherAI","request",params] sends a request to the Together AI API, using parameters params. The following gives possible requests.

Text

Request:

"Completion" create text completion for a given prompt

Parameters:
  • "Prompt"(required)the prompt for which to generate completions
    "MaxTokens"Automaticmaximum number of tokens to generate
    "Model"Automaticname of the model to use
    "N"Automaticnumber of completions to return
    "SafetyModel"Automaticmoderation model to use; possible values include "Meta-Llama/Llama-Guard-7b"
    "StopTokens"Automaticstrings where the API will stop generating further tokens
    "Temperature"Automaticsampling temperature
    "TotalProbabilityCutoff"Automatican alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with the requested probability mass
Request:

"Chat"   create a response for the given chat conversation

Parameters:

  "Messages"                 (required)   a list of messages in the conversation, each given as an association with "Role" and "Content" keys
  "MaxTokens"                Automatic    maximum number of tokens to generate
  "Model"                    Automatic    name of the model to use
  "N"                        Automatic    number of completions to return
  "SafetyModel"              Automatic    moderation model to use; possible values include "Meta-Llama/Llama-Guard-7b"
  "StopTokens"               Automatic    strings where the API will stop generating further tokens
  "Stream"                   False        return the result as server-sent events
  "Temperature"              Automatic    sampling temperature
  "ToolChoice"               Automatic    which (if any) tool is called by the model
  "Tools"                    Automatic    one or more LLMTool objects available to the model
  "TopProbabilities"         Automatic    sample only among the k highest-probability classes
  "TotalProbabilityCutoff"   Automatic    an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with the requested probability mass
Request:

"Embedding"   create an embedding vector representing the input text

Parameters:

  "Input"   (required)   one or a list of texts to get embeddings for
  "Model"   Automatic    name of the model to use
Image

Request:

"ImageCreate"   create a square image given a prompt

Parameters:

  "Prompt"   (required)   text description of the desired image
  "Model"    Automatic    name of the model to use
  "N"        Automatic    number of images to generate
  "Seed"     Automatic    seed for the image generation
  "Size"     Automatic    size of the generated image
  "Steps"    Automatic    number of iterations
Model Lists

Request:

"ChatModelList"   list models available for the "Chat" request

Request:

"CompletionModelList"   list models available for the "Completion" request

Request:

"EmbeddingModelList"   list models available for the "Embedding" request

Request:

"ImageModelList"   list models available for the "ImageCreate" request

Examples


Basic Examples  (1)

Create a new connection:
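
A minimal sketch; if no saved connection is found, an authentication request is launched:

  ServiceConnect["TogetherAI"]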

Complete a piece of text:
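
A sketch using the default model; the prompt text and token limit are illustrative:

  ServiceExecute["TogetherAI", "Completion",
   {"Prompt" -> "Once upon a time", "MaxTokens" -> 32}]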

Generate a response from a chat:
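
A sketch with a single user message; the message content is illustrative:

  ServiceExecute["TogetherAI", "Chat",
   {"Messages" -> {<|"Role" -> "user", "Content" -> "What is the capital of France?"|>}}]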

Compute the embedding for a sentence:
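
A sketch using the default embedding model; the input sentence is illustrative:

  ServiceExecute["TogetherAI", "Embedding",
   {"Input" -> "The quick brown fox jumps over the lazy dog."}]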

Generate an image from a prompt:
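
A sketch using the default image model; the prompt is illustrative:

  ServiceExecute["TogetherAI", "ImageCreate",
   {"Prompt" -> "a watercolor painting of a lighthouse at sunset"}]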

Scope  (7)

Text  (5)

Completion  (1)

Change the sampling temperature:
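
A sketch with a higher temperature to make completions more varied; the prompt and value are illustrative:

  ServiceExecute["TogetherAI", "Completion",
   {"Prompt" -> "Write a tagline for a coffee shop:", "Temperature" -> 1.2}]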

Increase the maximum number of tokens returned:
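
A sketch raising "MaxTokens" so that longer completions can be returned; the prompt and value are illustrative:

  ServiceExecute["TogetherAI", "Completion",
   {"Prompt" -> "Explain how a refrigerator works.", "MaxTokens" -> 256}]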

Chat  (1)

Respond to a chat containing multiple messages:
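
A sketch of a short multi-turn conversation; the message contents are illustrative:

  ServiceExecute["TogetherAI", "Chat",
   {"Messages" -> {
      <|"Role" -> "system", "Content" -> "You are a concise assistant."|>,
      <|"Role" -> "user", "Content" -> "What is an embedding?"|>,
      <|"Role" -> "assistant", "Content" -> "A numeric vector that represents a piece of text."|>,
      <|"Role" -> "user", "Content" -> "Name one thing embeddings are used for."|>}}]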

Allow the model to use an LLMTool:
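
A sketch passing a simple tool; the tool definition is illustrative, and the selected model must support tool calling:

  tool = LLMTool["CurrentDate", DateString["ISODate"] &];
  ServiceExecute["TogetherAI", "Chat",
   {"Messages" -> {<|"Role" -> "user", "Content" -> "What is today's date?"|>},
    "Tools" -> {tool}}]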

ChatModelList  (1)

Look up the list of available chat models:
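
For example (this request takes no additional parameters):

  ServiceExecute["TogetherAI", "ChatModelList"]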

CompletionModelList  (1)

Look up the list of available text completion models:
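
For example:

  ServiceExecute["TogetherAI", "CompletionModelList"]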

EmbeddingModelList  (1)

Look up the list of available embedding models:
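
For example:

  ServiceExecute["TogetherAI", "EmbeddingModelList"]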

Image  (2)

Return multiple results:
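
A sketch requesting two images for the same prompt; the prompt is illustrative:

  ServiceExecute["TogetherAI", "ImageCreate",
   {"Prompt" -> "an isometric illustration of a small island", "N" -> 2}]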

ImageModelList  (1)

Look up the list of available image generation models:
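
For example:

  ServiceExecute["TogetherAI", "ImageModelList"]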