"AlephAlpha" (Service Connection)

This service connection requires an external account »

Use the AlephAlpha API with the Wolfram Language.

Connecting & Authenticating

ServiceConnect["AlephAlpha"] creates a connection to the AlephAlpha API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
Use of this connection requires internet access and an AlephAlpha account.

Requests

ServiceExecute["AlpehAlpha","request",params] sends a request to the AlephAlpha API, using parameters params. The following gives possible requests.
Request:

"TestConnection" returns Success for working connection, Failure otherwise

Text

Request:

"Completion" create text completion for a given prompt

Parameters:
  • "Prompt"(required)the prompt for which to generate completions
    "MaxTokens"Automaticmaximum number of tokens to generate
    "Model"Automaticname of the model to use
    "N"Automaticnumber of completions to return
    "Nice"Automaticrequest priority level (True deprioritizes it)
    "StopTokens"Automaticstrings where the API will stop generating further tokens
    "Temperature"Automaticsampling temperature
    "TopProbabilities"Automaticsample only among the k highest-probability classes
    "TotalProbabilityCutoff"Automatican alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with the requested probability mass
Request:

    "Chat" create a response for the given chat conversation

    Parameters:
  • "Messages"(required)a list of messages in the conversation, each given as an association with "Role" and "Content" keys
    "MaxTokens"Automaticmaximum number of tokens to generate
    "Model"Automaticname of the model to use
    "N"Automaticnumber of chat completions to return
    "Nice"Automaticrequest priority level (True deprioritizes it)
    "StopTokens"Automaticstrings where the API will stop generating further tokens
    "Temperature"Automaticsampling temperature (between 0 and 2)
    "TopProbabilities"Automaticsample only among the k highest-probability classes
    "TotalProbabilityCutoff"Automatican alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with the requested probability mass
Request:

    "Embedding" create an embedding vector representing the input text

    Parameters:
  • "Input"(required)one or a list of texts to get embeddings for
    "EmbeddingLayers"Automaticlist of layer indices to return embedding from
    "EmbeddingPooling"Automaticpooling operation to use; possible values include "mean", "max" or "last_token"
    "Model"Automaticname of the model to use
    "NormalizeEmbedding"Falsereturn normalized embeddings
Model Lists

Request:

"ChatModelList"    list models available for the "Chat" request

Examples

Basic Examples  (1)

Create a new connection:
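
An illustrative input (on first use this launches an authentication request, as described above):

    ServiceConnect["AlephAlpha"]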

Complete a piece of text:
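
An illustrative input (the prompt is a placeholder):

    ServiceExecute["AlephAlpha", "Completion", {"Prompt" -> "An apple a day"}]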

Generate a response from a chat:
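
An illustrative input (the message content is a placeholder):

    ServiceExecute["AlephAlpha", "Chat",
     {"Messages" -> {<|"Role" -> "user", "Content" -> "Hi! Can you briefly introduce yourself?"|>}}]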

Compute the embedding for a sentence:
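
An illustrative input (the sentence is a placeholder):

    ServiceExecute["AlephAlpha", "Embedding", {"Input" -> "I love the Wolfram Language."}]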

Scope  (2)

Text  (2)

Completion  (1)

Increase the number of characters returned:
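
An illustrative input using "MaxTokens" (the prompt and value are placeholders):

    ServiceExecute["AlephAlpha", "Completion",
     {"Prompt" -> "Once upon a time", "MaxTokens" -> 256}]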

Chat  (1)

Respond to a chat containing multiple messages:
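
An illustrative input (the conversation is a placeholder):

    ServiceExecute["AlephAlpha", "Chat", {"Messages" -> {
       <|"Role" -> "user", "Content" -> "What is the capital of France?"|>,
       <|"Role" -> "assistant", "Content" -> "The capital of France is Paris."|>,
       <|"Role" -> "user", "Content" -> "How many people live there?"|>}}]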