"PaLM" (Service Connection)
This service is obsolete as of Version 14.1 (2024). The underlying API was officially discontinued by Google in October 2024. Use "GoogleGemini" instead.
Connecting & Authenticating
ServiceConnect["PaLM"] creates a connection to the PaLM API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
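As a sketch (the call could only succeed while the PaLM API was still live; use "GoogleGemini" today), a connection object is created once and then passed to ServiceExecute:

```wolfram
(* Create (or reuse) an authenticated connection to the PaLM API;
   launches an authentication dialog if no saved connection exists *)
palm = ServiceConnect["PaLM"]
```

The returned ServiceObject can be stored in a variable and reused across requests.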
Requests
"Completion" — create a text completion for a given prompt
"Prompt" | the prompt for which to generate completions
"MaxTokens" | Automatic | maximum number of tokens to generate
"Model" | Automatic | name of the model to use
"N" | Automatic | number of completions to return (1 to 8)
"StopTokens" | None | up to four strings at which the API will stop generating further tokens
"Temperature" | Automatic | sampling temperature (between 0 and 1)
"TopProbabilities" | Automatic | sample only among the k highest-probability classes
"TotalProbabilityCutoff" | None | sample among the most probable classes with an accumulated probability of at least p (nucleus sampling)
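Putting the parameters above together, a completion request might look like the following sketch; the parameter names come from the table, and the prompt text is purely illustrative:

```wolfram
palm = ServiceConnect["PaLM"];

(* Request a single completion, capping its length and lowering randomness *)
ServiceExecute[palm, "Completion",
 {"Prompt" -> "Write a haiku about autumn.",
  "MaxTokens" -> 64,
  "Temperature" -> 0.7,
  "N" -> 1}]
```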
"Chat" — create a response for the given chat conversation
"Messages" | a list of messages in the conversation
"Model" | Automatic | name of the model to use
"N" | Automatic | number of completions to return (1 to 8)
"Temperature" | Automatic | sampling temperature (between 0 and 1)
"TopProbabilities" | Automatic | sample only among the k highest-probability classes
"TotalProbabilityCutoff" | None | sample among the most probable classes with an accumulated probability of at least p (nucleus sampling)
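A chat request passes the conversation history as the "Messages" parameter. In the sketch below, each message is written as an association with role and content keys; the exact message schema follows the service connection's conventions, so the keys shown are an assumption:

```wolfram
palm = ServiceConnect["PaLM"];

(* Single-turn chat; the association keys are illustrative *)
ServiceExecute[palm, "Chat",
 {"Messages" -> {
    <|"Role" -> "User", "Content" -> "What is the capital of France?"|>},
  "Temperature" -> 0.2}]
```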
"TextEmbedding" — create an embedding vector representing the input text
"Text" | text that the model will turn into an embedding
"Model" | Automatic | name of the model to use
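A minimal embedding request, sketched with an illustrative input string; the result is a numeric vector suitable for similarity comparisons:

```wolfram
palm = ServiceConnect["PaLM"];

(* Turn a short text into an embedding vector *)
emb = ServiceExecute[palm, "TextEmbedding", {"Text" -> "Hello, world."}]
```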
"TokenCount" — run a model's tokenizer on a prompt and return the token count
"Input" | text or messages to tokenize
"Model" | Automatic | name of the model to use
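Token counting generates no text, so it is a cheap way to check a prompt against a model's context limit. A sketch with an illustrative input:

```wolfram
palm = ServiceConnect["PaLM"];

(* Count the tokens in a prompt without generating a completion *)
ServiceExecute[palm, "TokenCount",
 {"Input" -> "How many tokens is this sentence?"}]
```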
Examples
Basic Examples (2)
Scope (11)
Completion (3)
Chat (3)
TextEmbedding (2)
See Also
ServiceExecute ▪ ServiceConnect ▪ ChatEvaluate ▪ LLMSynthesize ▪ LLMConfiguration
Service Connections: GoogleGemini ▪ OpenAI ▪ Anthropic