"PaLM" (Service Connection)

Use the PaLM API with the Wolfram Language.

Connecting & Authenticating

ServiceConnect["PaLM"] creates a connection to the PaLM API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
Use of this connection requires internet access and a Google account.

Requests

Request:

"Completion"    create text completion for a given prompt

Required parameters:
  "Prompt"    the prompt for which to generate completions

Optional parameters:
  "MaxTokens"                 Automatic    maximum number of tokens to generate
  "Model"                     Automatic    name of the model to use
  "N"                         Automatic    number of completions to return (1 to 8)
  "StopTokens"                None         up to four strings where the API will stop generating further tokens
  "Temperature"               Automatic    sampling temperature (between 0 and 1)
  "TopProbabilities"          Automatic    sample only among the k highest-probability classes
  "TotalProbabilityCutoff"    None         sample among the most probable classes with an accumulated probability of at least p (nucleus sampling)
Request:

"Chat"    create a response for the given chat conversation

Required parameters:
  "Messages"    a list of messages in the conversation

Optional parameters:
  "Model"                     Automatic    name of the model to use
  "N"                         Automatic    number of completions to return (1 to 8)
  "Temperature"               Automatic    sampling temperature (between 0 and 1)
  "TopProbabilities"          Automatic    sample only among the k highest-probability classes
  "TotalProbabilityCutoff"    None         sample among the most probable classes with an accumulated probability of at least p (nucleus sampling)
Request:

"TextEmbedding"    create an embedding vector representing the input text

Required parameters:
  "Text"    text that the model will turn into an embedding

Optional parameters:
  "Model"    Automatic    name of the model to use
Request:

"TokenCount"    run a model's tokenizer on a prompt and return the token count

Required parameters:
  "Input"    text or messages to tokenize

Optional parameters:
  "Model"    Automatic    name of the model to use
Examples

    Basic Examples  (2)

    Create a new connection:
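
    For instance, the connection object can be assigned to a symbol for reuse in the examples below:

        palm = ServiceConnect["PaLM"]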

    Complete a piece of text:
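
    A minimal sketch of the "Completion" request, using an arbitrary example prompt:

        ServiceExecute[palm, "Completion", {"Prompt" -> "Write a haiku about the ocean."}]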

    Generate a response from a chat:
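
    A sketch of the "Chat" request; the message structure shown here (associations with "Role" and "Content" keys) is an assumption about the expected format:

        ServiceExecute[palm, "Chat",
          {"Messages" -> {<|"Role" -> "user", "Content" -> "Hello, who are you?"|>}}]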

    Scope  (11)

    Completion  (3)

    Change the sampling temperature:
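
    For example, a higher temperature gives more varied output:

        ServiceExecute[palm, "Completion",
          {"Prompt" -> "Invent a name for a coffee shop.", "Temperature" -> 0.9}]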

    Increase the maximum number of tokens returned:
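
    A sketch using the "MaxTokens" parameter; the value 256 is an arbitrary choice:

        ServiceExecute[palm, "Completion",
          {"Prompt" -> "Describe the water cycle.", "MaxTokens" -> 256}]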

    Return multiple completions:
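
    For example, requesting three completions with the "N" parameter:

        ServiceExecute[palm, "Completion",
          {"Prompt" -> "Suggest a title for a mystery novel.", "N" -> 3}]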

    Chat  (3)

    Respond to a chat containing multiple messages:
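
    A sketch with a short conversation history; the "Role" and "Content" keys and the role names are assumptions about the message format:

        ServiceExecute[palm, "Chat", {"Messages" -> {
            <|"Role" -> "user", "Content" -> "What is the tallest mountain on Earth?"|>,
            <|"Role" -> "assistant", "Content" -> "Mount Everest."|>,
            <|"Role" -> "user", "Content" -> "How tall is it?"|>}}]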

    Change the sampling temperature:
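
    For example, a low temperature gives more deterministic replies (message format as assumed above):

        ServiceExecute[palm, "Chat",
          {"Messages" -> {<|"Role" -> "user", "Content" -> "Summarize relativity in one sentence."|>},
           "Temperature" -> 0.2}]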

    Return multiple completions:
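
    For example, asking for three candidate replies with "N":

        ServiceExecute[palm, "Chat",
          {"Messages" -> {<|"Role" -> "user", "Content" -> "Tell me a fun fact."|>}, "N" -> 3}]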

    TextEmbedding  (2)

    Compute the vector embedding of some text:
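
    A minimal sketch of the "TextEmbedding" request:

        ServiceExecute[palm, "TextEmbedding",
          {"Text" -> "The quick brown fox jumps over the lazy dog."}]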

    Compute the distance between vector embeddings to find semantic similarities:
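
    A sketch assuming each request returns a plain numeric vector; CosineDistance is then a natural similarity measure:

        emb1 = ServiceExecute[palm, "TextEmbedding", {"Text" -> "A dog chased the ball."}];
        emb2 = ServiceExecute[palm, "TextEmbedding", {"Text" -> "A puppy ran after a toy."}];
        CosineDistance[emb1, emb2]  (* smaller values indicate more similar meanings *)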

    TokenCount  (3)

    Get a token count for a string prompt:
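
    For example:

        ServiceExecute[palm, "TokenCount", {"Input" -> "How many tokens does this sentence use?"}]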

    Get a token count for a messages prompt:
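
    A sketch passing a list of messages; the message structure is the same assumption as in the "Chat" examples:

        ServiceExecute[palm, "TokenCount",
          {"Input" -> {<|"Role" -> "user", "Content" -> "Hello!"|>,
                       <|"Role" -> "assistant", "Content" -> "Hi! How can I help?"|>}}]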

    When not specified, the model is chosen automatically:
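
    For example, omitting "Model" (equivalent to "Model" -> Automatic):

        ServiceExecute[palm, "TokenCount", {"Input" -> "Hello!"}]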