OBSOLETE SERVICE CONNECTION
PaLM (Service Connection)
Use the PaLM API with the Wolfram Language.
Connecting & Authenticating
ServiceConnect["PaLM"] creates a connection to the PaLM API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
Use of this connection requires internet access and a Google account.
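For example (a minimal sketch; the prompt text is purely illustrative), the connection can be saved in a variable and passed to ServiceExecute to issue the requests listed below:
  palm = ServiceConnect["PaLM"]
  ServiceExecute[palm, "Completion", {"Prompt" -> "Write a short greeting."}]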
Requests
"Completion" — create text completion for a given prompt
"Prompt" the prompt for which to generate completions
"MaxTokens" Automatic maximum number of tokens to generate
"Model" Automatic name of the model to use
"N" Automatic number of completions to return (1 to 8)
"StopTokens" None up to four strings where the API will stop generating further tokens
"Temperature" Automatic sampling temperature (between 0 and 1)
"TopProbabilities" Automatic sample only among the k highest-probability classes
"TotalProbabilityCutoff" None sample among the most probable classes with an accumulated probability of at least p (nucleus sampling)
"Chat" — create a response for the given chat conversation
"Messages" a list of messages in the conversation
"Model" Automatic name of the model to use
"N" Automatic number of completions to return (1 to 8)
"Temperature" Automatic sampling temperature (between 0 and 1)
"TopProbabilities" Automatic sample only among the k highest-probability classes
"TotalProbabilityCutoff" None sample among the most probable classes with an accumulated probability of at least p (nucleus sampling)
"TextEmbedding" — create an embedding vector representing the input text
"Text" text that the model will turn into an embedding
"TokenCount" — run a model's tokenizer on a prompt and return the token count
"Input" text or messages to tokenize
Examples
Basic Examples (2)
Create a new connection:
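For instance (input only; the resulting ServiceObject depends on the account used to authenticate):
  palm = ServiceConnect["PaLM"]  (* launches an authentication dialog if no saved connection exists *)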
Complete a piece of text:
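A possible input using the "Completion" request and its "Prompt" parameter (the prompt is illustrative, and the generated text will vary):
  ServiceExecute[palm, "Completion", {"Prompt" -> "The Wolfram Language is"}]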
Generate a response from a chat:
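A sketch of a "Chat" request; the message format shown here (associations with "Role" and "Content" keys) is an assumption and may differ from the connection's actual schema:
  ServiceExecute[palm, "Chat",
   {"Messages" -> {<|"Role" -> "User", "Content" -> "Hello! What can you do?"|>}}]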
Scope (11)
Completion (3)
Change the sampling temperature:
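For example, passing a higher "Temperature" makes sampling more random (values are illustrative):
  ServiceExecute[palm, "Completion",
   {"Prompt" -> "Invent a name for a new color.", "Temperature" -> 0.9}]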
Increase the number of characters returned:
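One way to do this is to raise the "MaxTokens" limit, which roughly bounds the length of the returned text (a sketch):
  ServiceExecute[palm, "Completion",
   {"Prompt" -> "Describe the history of the abacus.", "MaxTokens" -> 256}]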
Return multiple completions:
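For instance, request several candidates with the "N" parameter (between 1 and 8):
  ServiceExecute[palm, "Completion",
   {"Prompt" -> "Suggest a title for a mystery novel.", "N" -> 3}]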
Chat (3)
Respond to a chat containing multiple messages:
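A sketch with an alternating conversation; as above, the "Role"/"Content" keys and the role names are assumptions:
  ServiceExecute[palm, "Chat",
   {"Messages" -> {
      <|"Role" -> "User", "Content" -> "What is the capital of France?"|>,
      <|"Role" -> "Assistant", "Content" -> "The capital of France is Paris."|>,
      <|"Role" -> "User", "Content" -> "And of Italy?"|>}}]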
Change the sampling temperature:
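For example, a low "Temperature" makes the reply more deterministic (illustrative values; message keys assumed as above):
  ServiceExecute[palm, "Chat",
   {"Messages" -> {<|"Role" -> "User", "Content" -> "Summarize the plot of Hamlet in one sentence."|>},
    "Temperature" -> 0.2}]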
Return multiple completions:
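As with "Completion", the "N" parameter requests several candidate replies (sketch):
  ServiceExecute[palm, "Chat",
   {"Messages" -> {<|"Role" -> "User", "Content" -> "Give me a tagline for a coffee shop."|>}, "N" -> 3}]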
TextEmbedding (2)
Compute the vector embedding of some text:
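A possible input; the result is assumed to be a numeric vector whose length depends on the embedding model:
  embedding = ServiceExecute[palm, "TextEmbedding", {"Text" -> "The sky is blue."}];
  Length[embedding]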
Compute the distance between vector embeddings to find semantic similarities:
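A sketch assuming each request returns a single numeric vector; semantically related texts are then expected to be closer under CosineDistance:
  e1 = ServiceExecute[palm, "TextEmbedding", {"Text" -> "A cat sat on the mat."}];
  e2 = ServiceExecute[palm, "TextEmbedding", {"Text" -> "A kitten rested on the rug."}];
  e3 = ServiceExecute[palm, "TextEmbedding", {"Text" -> "Quarterly revenue rose by 4%."}];
  {CosineDistance[e1, e2], CosineDistance[e1, e3]}
Here the first distance is expected to be smaller than the second.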
TokenCount (3)
Get a token count for a string prompt:
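For example (the exact count depends on the model's tokenizer):
  ServiceExecute[palm, "TokenCount", {"Input" -> "How many tokens does this sentence use?"}]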
Get a token count for a messages prompt:
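A sketch passing a list of chat messages as the "Input"; the message keys are assumed, as in the "Chat" examples:
  ServiceExecute[palm, "TokenCount",
   {"Input" -> {<|"Role" -> "User", "Content" -> "Hello!"|>,
     <|"Role" -> "Assistant", "Content" -> "Hi there. How can I help?"|>}}]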
When not specified, the model is chosen automatically:
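For instance, omitting any model specification lets the service pick a default model's tokenizer (sketch):
  ServiceExecute[palm, "TokenCount", {"Input" -> "The quick brown fox jumps over the lazy dog."}]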