"TogetherAI" (Service Connection)
This service connection requires an external account »
Use the Together AI API with the Wolfram Language.
Connecting & Authenticating
ServiceConnect["TogetherAI"] creates a connection to the Together AI API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
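A minimal sketch of establishing the connection (on first use, an authentication dialog asks for a Together AI API key; subsequent sessions reuse the saved connection):

```wolfram
conn = ServiceConnect["TogetherAI"]
```

The resulting ServiceObject can be passed to ServiceExecute in place of the connection name.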
Requests
ServiceExecute["TogetherAI","request",params] sends a request to the Together AI API, using parameters params. The following requests are supported.
Text
"Completion" — create text completion for a given prompt
"Prompt" | (required) | the prompt for which to generate completions
"MaxTokens" | Automatic | maximum number of tokens to generate
"Model" | Automatic | name of the model to use
"N" | Automatic | number of completions to return
"SafetyModel" | Automatic | moderation model to use; possible values include "Meta-Llama/Llama-Guard-7b"
"StopTokens" | Automatic | strings at which the API stops generating further tokens
"Temperature" | Automatic | sampling temperature
"TotalProbabilityCutoff" | Automatic | an alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens comprising the requested total probability mass
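A minimal sketch of a "Completion" request; only "Prompt" is required, and omitted parameters such as "Model" fall back to the service defaults:

```wolfram
ServiceExecute["TogetherAI", "Completion",
 {"Prompt" -> "The capital of France is", "MaxTokens" -> 5}]
```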
"Chat" — create a response for the given chat conversation
"Messages" | (required) | a list of messages in the conversation, each given as an association with "Role" and "Content" keys
"MaxTokens" | Automatic | maximum number of tokens to generate
"Model" | Automatic | name of the model to use
"N" | Automatic | number of completions to return
"SafetyModel" | Automatic | moderation model to use; possible values include "Meta-Llama/Llama-Guard-7b"
"StopTokens" | Automatic | strings at which the API stops generating further tokens
"Stream" | False | return the result as server-sent events
"Temperature" | Automatic | sampling temperature
"ToolChoice" | Automatic | which (if any) tool is called by the model
"Tools" | Automatic | one or more LLMTool objects available to the model
"TopProbabilities" | Automatic | sample only among the k highest-probability classes
"TotalProbabilityCutoff" | Automatic | an alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens comprising the requested total probability mass
"Embedding" — create an embedding vector representing the input text
"Input" | (required) | a text or a list of texts to get embeddings for
"Model" | Automatic | name of the model to use |
Image
"ImageCreate" — create a square image given a prompt
"Prompt" | (required) | text description of the desired image
"Model" | Automatic | name of the model to use
"N" | Automatic | number of images to generate
"Seed" | Automatic | seed for the image generation
"Size" | Automatic | size of the generated image
"Steps" | Automatic | number of iterations |
Model Lists
"ChatModelList" — list models available for the "Chat" request
"CompletionModelList" — list models available for the "Completion" request
"EmbeddingModelList" — list models available for the "Embedding" request
"ImageModelList" — list models available for the "ImageCreate" request
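The model-list requests take no parameters; a sketch of retrieving the models usable with "Chat" (the actual list depends on the service at the time of the call):

```wolfram
ServiceExecute["TogetherAI", "ChatModelList"]
```

Any name from this list can be passed as the "Model" parameter of a "Chat" request.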
Examples
Basic Examples (1)
Generate a response from a chat:
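A sketch of the request, with messages given as associations as described above (the model name is left Automatic so the service default is used):

```wolfram
ServiceExecute["TogetherAI", "Chat",
 {"Messages" -> {
    <|"Role" -> "user",
     "Content" -> "Write a haiku about the Wolfram Language."|>},
  "MaxTokens" -> 100}]
```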
Compute the embedding for a sentence:
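A sketch using the default embedding model; passing a list of strings under "Input" would return one vector per string:

```wolfram
ServiceExecute["TogetherAI", "Embedding",
 {"Input" -> "The cat sat on the mat."}]
```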
Generate an image from a prompt:
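A sketch of an "ImageCreate" request; only "Prompt" is required, and the image size and model fall back to the service defaults:

```wolfram
ServiceExecute["TogetherAI", "ImageCreate",
 {"Prompt" -> "a watercolor painting of a lighthouse at dusk"}]
```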