Introduction to Chat Notebooks
Creating and Using Chat Notebooks and Inputs | Personas | Prompt Modifiers | Function Prompts | Comparing Prompt Functions and Prompt Modifiers | Chat Conversation Management | Chat Settings
Chat Notebooks provide interactive chat-based access to large language models (LLMs), including the ability to offer natural language–based assistance in using the Wolfram Language.
Creating and Using Chat Notebooks and Inputs
To Create a New Chat Notebook
From the menu, choose File ▶ New ▶ Chat-Enabled Notebook or press Alt+N or Cmd+Option+N.
In a Chat Notebook, as in a standard notebook, the default cell type is a Wolfram Language input cell. Chat input cells can be created in several different ways.
You can also create a notebook in which the default new cell is a chat input cell by choosing File ▶ New ▶ Chat-Driven Notebook from the menu.
To Create a New Chat Input Cell
Type ' (single quote) in any new cell or choose ChatInput from the insert cell button in the notebook toolbar:
You can also click the input chooser on the cell insertion bar and choose Chat Input.
You can enable chat features in any notebook by using the chat settings button in the notebook toolbar or using CreateNotebook["Chat"] to create a new Chat Notebook.
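For example, a minimal sketch of creating a Chat Notebook and writing a chat input into it programmatically, using the CreateNotebook form above (that "ChatInput" is the underlying cell style name is an assumption based on the insert cell menu item mentioned earlier):

    nb = CreateNotebook["Chat"];
    (* write a chat input cell into the new notebook *)
    NotebookWrite[nb, Cell["How do I plot a sine curve?", "ChatInput"]]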
Using Chat Inputs
To start a chat conversation, begin by creating a new chat input cell and typing your input. Chat input cells have a blue border and a chat icon on their left:
The icon on the left (the persona icon) indicates the current persona. The default persona is Code Assistant, which instructs the LLM to behave as a friendly helper with Wolfram Language code.
Chat input cells have two primary settings that govern how chat inputs are processed: the LLM model and the persona. Clicking the persona icon to the left of the chat input cell lets you see and change these settings.
To evaluate the chat input, place your cursor in the chat input cell and press Shift+Enter:
When you perform a chat evaluation, the contents of the chat input cell, along with the chat history, are sent to the configured LLM model. The response is written into your notebook as it is generated.
If you want to stop the chat evaluation before it is complete, you can click the stop icon on the output cell.
Elements of Chat Outputs
Output from the LLM is formatted: links are clickable, code has appropriate styling and is runnable, and Manipulate outputs are interactive.
You can paste the output into the notebook and edit it like any other input. Hovering over the code in the output brings up three buttons:
- The first button inserts the code into a new input cell and evaluates it.
- The second copies the code into a new input cell but does not evaluate it.
- The third copies the code to the clipboard.
Sometimes, as part of responding, the LLM will invoke an LLMTool to help it. Tools are a modular way to extend the capabilities of the LLM, ranging from performing a specific predefined action, such as looking up documentation or evaluating code, to performing another computational task.
If the LLM uses a tool, it will mention it in the output. You can click the arrow to see what was sent to the LLM:
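Tools can also be defined and used programmatically. A minimal sketch with LLMTool and LLMSynthesize (the tool name and description here are illustrative, not a built-in tool):

    (* an illustrative tool that returns the current date *)
    dateTool = LLMTool[{"currentdate", "get the current date"}, {}, DateString[] &];
    LLMSynthesize["What is today's date?", LLMEvaluator -> <|"Tools" -> {dateTool}|>]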
Prompts
There are three special input types: personas, modifiers and functions.
- Persona prompts define the overall tone and behavior of chat interactions
- Prompt modifiers modify the output coming from the LLM
- Function prompts generate output from existing text or other input
Personas
Personas are predefined sets of additional instructions applied to chat interactions. They can change the tone of the output, for example by asking the LLM to focus on generating code instead of text, or to provide simpler or shorter output.
To Invoke a Persona
Select a persona from the chat settings button in the notebook toolbar to establish it as the default for that notebook. To direct a single chat at a persona, type @persona at the beginning of the chat input cell.
When you direct a chat at a persona in a particular chat cell, that persona is sent the whole previous history. But after that persona has responded, subsequent chat cells revert to using the current default persona.
You can use the Chat Notebook Settings menu to add personas from the Wolfram Prompt Repository or another source.
To Use a Persona from the Prompt Repository
From the Add & Manage Personas dialog, find the persona you want in the Prompt Repository and click the install button in the upper right. Then simply use @persona in a chat input cell:
To Use a Persona from Another Source
From the Add & Manage Personas dialog, enter the URL for a valid Prompt Definition notebook for the persona you want to install.
Prompt Modifiers
Prompt modifiers give the LLM instructions about what kind of output to return.
To Use a Prompt Modifier
Add one or more prompt modifiers (beginning with #) to the end of your input:
Some modifier prompts take parameters, which are added after a |. Click the #modifier box and add "| parameter":
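For example, assuming (as in the comparison later in this tutorial) that the Translated modifier takes a target language as its parameter:

    Who was Marie Curie? #Translated|French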
To Find a Prompt Modifier
Find the modifier you want in the Prompt Repository and simply use it in a chat input cell.
Chat input cells will also autosuggest prompts as you begin to type:
Prompts can be used programmatically with LLMPrompt or LLMResourceFunction.
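A minimal programmatic sketch of the same modifier (the parameter form of "Translated" is an assumption based on the chat example above):

    (* the answer is generated, then rendered in the requested language *)
    LLMSynthesize[{"Who was Marie Curie?", LLMPrompt["Translated", {"French"}]}]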
Function Prompts
Function prompts take an input and perform operations to generate output. Examples include code and text generation and editing.
To Use a Function Prompt
Type "!prompt" at the beginning of your input:
To Specify the Input
By default, a function prompt uses the content of the chat input as its argument, but you can refer to previous notebook content instead.
> — use the text in the chat input cell as an argument (default)
^ — use the preceding cell in a chat notebook as an argument
^^ — use all preceding content in a chat block as an argument
Type the indicator after the prompt:
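For example, using the Translate function prompt discussed below to translate the contents of the preceding cell:

    !Translate ^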
To Find a Function Prompt
Find the function you want in the Prompt Repository and simply use it in a chat input cell.
Chat input cells will also autosuggest prompts as you begin to type.
You can use function prompts programmatically using LLMResourceFunction.
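A minimal sketch, assuming the Emojify prompt from the Prompt Repository, which transforms the text it is given:

    LLMResourceFunction["Emojify"]["I am going to the beach this weekend"]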
Comparing Prompt Functions and Prompt Modifiers
Sometimes you have a choice between a function prompt and a prompt modifier that seem similar. Because function prompts operate on the given input, while prompt modifiers modify the response from the LLM, they produce different results.
For example, "!Translate" translates the text you give it, and "#Translated" asks a question and then translates the answer:
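The same distinction holds programmatically; a sketch using the functions above (the exact parameters of these repository prompts are assumptions):

    (* function prompt: the input text itself is translated *)
    LLMResourceFunction["Translate"]["What is the capital of France?"]

    (* prompt modifier: the LLM answers the question, and the answer is translated *)
    LLMSynthesize[{"What is the capital of France?", LLMPrompt["Translated", {"French"}]}]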
Chat Conversation Management
Unlike in other Wolfram Notebooks, in Chat Notebooks the history is positional rather than temporal: chat cells are "aware" of the cells above them, not of when those cells were evaluated. That means when working with chat inputs, the order of cells matters.
Chat History
The chat history sent to the LLM includes all cell types, not only chat inputs and outputs. That means you can write notes in a text cell or code in a Wolfram Language input cell and ask the LLM about it:
When necessary, more information is added to the history. For example, if an evaluation generates messages, the associated stack trace is included with the message cells.
The Wolfram System automatically trims the content sent to the LLM. For example, large outputs are summarized. In a long notebook, content will sometimes be dropped from the top of the notebook.
You can limit the amount of chat history being sent to the LLM by using chat blocks.
Creating Chat Blocks
A chat block is the set of cells beginning with the current chat input and extending upward to the nearest chat delimiter cell or the top of the notebook.
Chat inputs in a chat block are independent of notebook content outside of the block. This provides a convenient way to switch between different personas or LLM models. Using chat blocks in a long notebook can also help you avoid using too many tokens by reducing the amount of chat history sent to the LLM.
Begin a new chat block by typing ~ (tilde) in a new cell or choosing ChatDelimiter from the Insert Cell menu.
This creates a gray separator. Chat inputs after the separator are independent of notebook content before the separator:
Other Delimiters
A chat block divider acts as both a chat delimiter and a section heading. Normal section styles, such as Section or Subsection, do not act as chat delimiters.
If you only want a single cell that is separate from the chats before and after it, you can create a side chat cell by typing a second quote mark (''):
Revising Outputs
When a chat input is reevaluated, the previous output is retained. The left-right arrows let you see the older results:
If the LLM makes a mistake, you can modify the output to teach it. Subsequent chats in the same chat block will "see" the revised output, so it will be less likely to make that mistake again in the same chat block.
Chat Settings
Settings can be specified at several different levels: cell, section, notebook and global. Many of the settings are the same at each level, but their scope differs.
Chat input cells inherit the settings of their section, and sections inherit notebook settings.
All levels include settings to select from the available personas and LLM models and to adjust the temperature (which affects the randomness of the responses).
The section-level menu is the same as the cell-level menu, with the addition of Chat Block Settings.
The notebook-level menu adds a checkbox to enable AI chat features (if they are off) and a checkbox to enable automatic result analysis, which assists with coding tasks, identifies errors and makes suggestions. If enabled, all output and chat history are automatically sent to the LLM.
Add & Manage Personas provides a dialog to control which installed personas appear in the menus, to remove personas and to install new personas from the Prompt Repository.
Add & Manage Tools provides a dialog to install tools from the LLM Tool Repository or another location and enable them for specific personas.
The global AI Settings dialog allows setting the default persona, model and temperature and shows a list of installed personas.