"AlephAlpha" (Service Connection)
This service connection requires an external account »
Use the AlephAlpha API with the Wolfram Language.
Connecting & Authenticating
ServiceConnect["AlephAlpha"] creates a connection to the AlephAlpha API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
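For example, a connection can be created and saved to a variable (on first use, a dialog prompts for an Aleph Alpha API key):

```wl
conn = ServiceConnect["AlephAlpha"]
```

The resulting ServiceObject can then be passed as the first argument of ServiceExecute in place of the service name.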
Requests
ServiceExecute["AlephAlpha","request",params] sends a request to the AlephAlpha API, using parameters params. The following gives possible requests.
"TestConnection" — returns Success for a working connection, Failure otherwise
Text
"Completion" — create text completion for a given prompt
"Prompt" | (required) | the prompt for which to generate completions | |
"MaxTokens" | Automatic | maximum number of tokens to generate | |
"Model" | Automatic | name of the model to use | |
"N" | Automatic | number of completions to return | |
"Nice" | Automatic | request priority level (True deprioritizes it) | |
"StopTokens" | Automatic | strings where the API will stop generating further tokens | |
"Temperature" | Automatic | sampling temperature | |
"TopProbabilities" | Automatic | sample only among the k highest-probability classes | |
"TotalProbabilityCutoff" | Automatic | nucleus sampling, an alternative to sampling with temperature, where the model considers only the tokens comprising the requested probability mass |
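A minimal "Completion" request might look like the following; the prompt and parameter values are purely illustrative:

```wl
ServiceExecute["AlephAlpha", "Completion",
 {"Prompt" -> "The Wolfram Language is",
  "MaxTokens" -> 32,
  "Temperature" -> 0.7}]
```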
"Chat" — create a response for the given chat conversation
"Messages" | (required) | a list of messages in the conversation, each given as an association with "Role" and "Content" keys | |
"MaxTokens" | Automatic | maximum number of tokens to generate | |
"Model" | Automatic | name of the model to use | |
"N" | Automatic | number of chat completions to return | |
"Nice" | Automatic | request priority level (True deprioritizes it) | |
"StopTokens" | Automatic | strings where the API will stop generating further tokens | |
"Temperature" | Automatic | sampling temperature (between 0 and 2) | |
"TopProbabilities" | Automatic | sample only among the k highest-probability classes | |
"TotalProbabilityCutoff" | Automatic | nucleus sampling, an alternative to sampling with temperature, where the model considers only the tokens comprising the requested probability mass |
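A "Chat" request takes the conversation as a list of associations with "Role" and "Content" keys, as described above. A sketch, with illustrative message content:

```wl
ServiceExecute["AlephAlpha", "Chat",
 {"Messages" -> {
    <|"Role" -> "system", "Content" -> "You are a helpful assistant."|>,
    <|"Role" -> "user", "Content" -> "What is the capital of France?"|>},
  "MaxTokens" -> 64}]
```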
"Embedding" — create an embedding vector representing the input text
"Input" | (required) | one or a list of texts to get embeddings for | |
"EmbeddingLayers" | Automatic | list of layer indices from which to return embeddings | |
"EmbeddingPooling" | Automatic | pooling operation to use; possible values include "mean", "max" or "last_token" | |
"Model" | Automatic | name of the model to use | |
"NormalizeEmbedding" | False | return normalized embeddings |
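For example, an "Embedding" request can take a list of texts and return one embedding vector per text; the input strings here are illustrative:

```wl
ServiceExecute["AlephAlpha", "Embedding",
 {"Input" -> {"Hello, world!", "Goodbye, world!"},
  "NormalizeEmbedding" -> True}]
```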
Model Lists
"ChatModelList" — list models available for the "Chat" request
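This request takes no parameters, so it can be issued directly; the returned model names can then be passed as the "Model" parameter of a "Chat" request:

```wl
ServiceExecute["AlephAlpha", "ChatModelList"]
```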