
Description
The LLM Prompt Node lets you configure the system prompt for your LLM calls and generate both text and structured content. With this Node, you can also:
- Include tools in your LLM calls.
- Enable image handling.
- Set advanced LLM request options.
LLM Prompt Settings
Large Language Model
System Prompt
You can add the following tags to the System Prompt field:
- @cognigyRecentConversation — the tag is replaced with a string that can contain up to 10 recent AI Agent and 10 user outputs.
- @cognigyRecentUserInputs — the tag is replaced with a string that can contain up to 10 recent user outputs.
When adding a tag, ensure that you leave a line break before and after the tag.
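For example, an illustrative System Prompt that uses the @cognigyRecentUserInputs tag, with a blank line before and after the tag:
```
You are a support assistant for an online shop.
Consider the most recent user messages:

@cognigyRecentUserInputs

Answer briefly and politely.
```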
Advanced
Parameter | Type | Description |
---|---|---|
Maximal Tokens | Slider | The maximum number of tokens to generate in the completion. |
Use Single Prompt Mode | Toggle | Send a single prompt to the model without any conversation context. This parameter is disabled by default. It doesn’t support multi-turn conversations or chat and is useful for simple, one-off completions. |
Transcript Turns | Slider | The number of conversation turns to include in the LLM chat completion request. By default, the value is 50 . |
Response Format | Select | Choose the format of the model’s output. |
Timeout | Number | The maximum number of milliseconds to wait for a response from the Generative AI Provider. |
Sampling Method | Select | Select the sampling method: Temperature or Top Percentage. |
Temperature | Slider | Define the sampling temperature, which ranges between 0 and 1. Higher values, such as 0.8, make the output more random, while lower values, such as 0.2, make it more focused and deterministic. |
Top Percentage | Slider | Control the Top-p (nucleus) sampling, ranging from 0 to 1. Higher values allow more diverse word choices, while lower values make the output more focused. For example, 0.9 means the model selects from the smallest set of words with a combined probability of 90%. |
Presence Penalty | Slider | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood of talking about new topics. |
Frequency Penalty | Slider | Number between -2.0 and 2.0. The penalty assigns a lower probability to tokens frequently appearing in the generated text, encouraging the model to generate more diverse and unique content. |
Use Stops | Toggle | Whether to use a list of stop words to let Generative AI know where the sentence stops. |
Stops | Text | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
Seed | Number | Use this parameter for consistent output when referring to the same LLM Prompt Node multiple times. Specify any integer number, for example, 123 . The number in the Seed field and the prompt in the Instruction (System Message/Prompt) field should remain unchanged during subsequent references to the Node.Note that in OpenAI, this parameter is in Beta and is supported only by certain models. |
Include Rich Media Context | Toggle | Controls whether context is added to the prompt. In this case, context refers to text extracted from rich media such as Text with Buttons, Quick Replies, and other types. This text provides AI Agents with additional information, improving their responses. If the Textual Description parameter in the Say, Question, or Optional Question Node is filled, the context is taken only from this parameter. If the Textual Description parameter is empty, the context is taken from the button titles and alt text in the rich media. By default, the Include Rich Media Context parameter is active. When this parameter is inactive, no context is added. |
Storage & Streaming Options
Parameter | Type | Description |
---|---|---|
How to handle the result | Select | Determine how to handle the prompt result: Store in Input, Store in Context, or Stream to Output. |
Input Key to store Result | CognigyScript | The parameter appears when Store in Input is selected. The result is stored in the promptResult Input object by default. You can specify another key. |
Context Key to store Result | CognigyScript | The parameter appears when Store in Context is selected. The result is stored in the promptResult Context object by default. You can specify another key. |
Stream Buffer Flush Tokens | Text Array | The parameter appears when Stream to Output is selected. It defines tokens that trigger the stream buffer to flush to the output. The tokens can be punctuation marks or symbols, such as \n . |
Stream Buffer Flush Overrides | Text Array | The parameter appears when Stream to Output is selected. It allows using regular expressions (without leading or trailing slashes) to control stream buffer flushing. A trailing $ is automatically added to match patterns at the end of the buffer. For example, \d+\. checks for a number followed by a dot at the end of the string. |
Output result immediately | Toggle | The parameter appears when you select either Store in Input or Store in Context. This parameter allows you to output results immediately without using the Say Node and LLM Prompt token. |
Store Detailed Results | Toggle | The parameter appears when you select either Store in Input or Store in Context, or when you enable Store Copy in Input. This parameter allows you to save detailed results of the LLM’s generated output. By default, the result is stored in the promptResult object. You can specify another value in the Context Key to store Result field to save it in the Context object, or in the Input Key to store Result field to save it in the Input object. The object contains keys such as result, finishReason, and usage. It may also include detailedResult if completion models are used, as well as firstChunk and lastChunk in some streaming results, depending on the LLM provider. See the example after this section. |
Store Copy in Input | Toggle | The parameter appears when Stream to Output is selected. In addition to streaming the result to the output, store a copy in the Input object by specifying a value in the Input Key to store Result field. |
The default Stream Buffer Flush Tokens are `.`, `!`, `?`, or any other symbols that act as delimiters for complete logical statements. When Cognigy.AI detects one of these tokens, it promptly flushes the token buffer into the voice or text chat.
The preconfigured Stream Buffer Flush Overrides are listed in the following table:
Regex | Description | Example |
---|---|---|
\d+\. | A number followed by a dot. | 26.08 |
\b(?:Dr|Ms|Mr|Mrs|Prof|Sr|Jr|ca)\. | Common abbreviations followed by a dot. | Mr. |
\b[A-Z]\. | A single capital letter followed by a dot. | M. Smith |
\.\.\. | Three dots used for omission. | ... |
\b.\..\. | Two-letter abbreviations. | i.e. , e.g. |
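For example, when Store Detailed Results is enabled and the result is stored in the Input object, the stored promptResult object might look like the following sketch. The top-level keys come from the table above; the nested usage fields and all values are illustrative and vary by LLM provider:
```json
{
  "result": "Your order has been shipped and should arrive within 3 business days.",
  "finishReason": "stop",
  "usage": {
    "promptTokens": 412,
    "completionTokens": 21,
    "totalTokens": 433
  }
}
```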
Tool Settings
Parameter | Type | Description |
---|---|---|
Tool Choice | Selector | If supported by your LLM model, this parameter determines how tools are selected by the AI Agent. |
Use Strict mode | Toggle | When the parameter is enabled, strict mode (if supported by the LLM provider) ensures that the arguments passed to a tool call precisely match the expected parameters. Enabling this feature can help prevent errors. However, it may cause a slight delay in the response, especially during the first call after making changes. |
Image Handling
Parameter | Type | Description |
---|---|---|
Process Images | Toggle | Enable the AI Agent to read and understand image attachments. Make sure that your LLM provider supports image processing; refer to your provider’s documentation. In addition, make sure that attachments are supported by and activated in your Endpoint, for example, Webchat. |
Images in Transcript | Selector | Configure how images older than the last turn are handled to reduce token usage: Minify — reduces the size of these images to 512x512px; Drop — excludes the images; Keep — sends the images at maximum size (this option consumes more tokens). |
Error Handling
Parameter | Type | Description |
---|---|---|
Log to System Logs | Toggle | Log errors to the system logs. They can be viewed on the Logs page of your Project. The parameter is inactive by default. |
Select Error Handling Approach | Select | Select one of the error handling options, such as Continue Flow Execution or Go to Node. |
Error Message (optional) | Text | The parameter appears when Continue Flow Execution is selected. Add a message to output if the LLM Prompt Node fails. |
Select Flow | Select | The parameter appears when Go to Node is selected. Select a Flow from the available options. |
Select Node | Select | The parameter appears when Go to Node is selected. Select a Node from the available options. |
Custom Options
Parameter | Type | Description |
---|---|---|
Custom Model Options | JSON | Additional parameters for the LLM model. You can specify individual parameters as well as entire functions. These parameters customize the behavior of the model, such as adjusting temperature, top_k, or presence_penalty. Note that if you use a parameter already set in the Node, for example, temperature, it will be overwritten. To view the full list of available parameters for your model, refer to the LLM provider’s API documentation, for example, OpenAI or Azure OpenAI. See the examples after this table. |
Custom Request Options | JSON | Additional parameters for the LLM request. These parameters customize the request itself, such as setting parameters related to timeout, retries, or headers. For more information, refer to the LLM provider’s API documentation. Examples: - { "timeout": 5000 } - { "headers": { "Authorization": "Bearer <token>" } } |
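For instance, a Custom Model Options value that adjusts sampling behavior might look like this. The values are illustrative; check your provider’s API documentation for the parameters it supports:
```json
{
  "temperature": 0.2,
  "top_k": 40,
  "presence_penalty": 0.5
}
```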
You can also use the Custom Model Options field to override the model defined in the LLM resource. In the following example, requests are sent to the claude-3-sonnet-20240229 model, despite the LLM resource defaulting to the claude-3-5-sonnet-20240620 model:
- Create an Anthropic LLM resource, for example, for the claude-3-5-sonnet-20240620 model.
- Create a Flow and add an LLM Prompt Node to it.
- In the LLM Prompt Node, select the model claude-3-5-sonnet-20240620 from the Large Language Model list.
- Override the model selection. In the Custom Model Options field, specify the custom model options as follows: { "model": "claude-3-sonnet-20240229" }.
- Click Save Node.
As a result, requests are sent to the claude-3-sonnet-20240229 model.
Debugging Settings
Parameter | Type | Description |
---|---|---|
Show Token Count | Toggle | Sends a debug message containing the input, output, and total token count. The message appears in the Interaction Panel when debug mode is enabled. Cognigy.AI uses the GPT-3 tokenizer algorithm, so actual token usage may vary depending on the model used. The parameter is inactive by default. |
Log System Prompt & Completion | Toggle | Sends a debug message containing the system prompt sent to the LLM provider and the subsequent completion. The message appears in the Interaction Panel when debug mode is enabled. The parameter is inactive by default. |
Log Tool Definitions | Toggle | Sends a debug message containing information about the configured AI Agent tools. The message appears in the Interaction Panel when debug mode is enabled. The parameter is inactive by default. |
Log LLM Latency | Toggle | Sends a debug message containing key latency metrics for the request to the model, including the time taken for the first output and the total time to complete the request. The message appears in the Interaction Panel when debug mode is enabled. The parameter is inactive by default. |
Send request logs to Webhook | Toggle | Sends the request sent to the LLM provider and the subsequent completion to a webhook service, including metadata, the request body, and custom logging data. With this parameter, you can use a webhook service to view detailed logs of the request to the LLM. The parameter is inactive by default. |
Webhook URL | CognigyScript | Sets the URL of the webhook service to send the request logs to. |
Custom Logging Data | CognigyScript | Sets custom data to send with the request to the webhook service. |
Condition for Webhook Logging | CognigyScript | Sets the condition for the webhook logging. |
Webhook Headers | Input fields | Sets the headers to send with the request to the webhook service. Use the Key and Value fields to enter a header. The Value field supports CognigyScript. After entering the header key, new empty Key and Value fields are automatically added, in case you need to add more headers. Alternatively, you can click Show JSON Editor and add input examples in the code field. |
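For instance, webhook headers entered via the JSON editor might look like this. The header names and values are illustrative; use the headers required by your webhook service:
```json
{
  "Authorization": "Bearer <token>",
  "Content-Type": "application/json"
}
```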
AI Agent Tool Settings
Tools are child Nodes of AI Agent Nodes. They define actions that can be taken by the AI Agent. If an AI Agent wants to execute the tool, the branch below the child Node is executed. At the end of a tool branch, it is advisable to use a Resolve Tool Action Node to return to the AI Agent. Clicking the Tool Node lets you define a tool, set its parameters, and enable debugging with detailed messages about the tool’s execution.
Tool
Parameter | Type | Description |
---|---|---|
Tool ID | CognigyScript | Provide a meaningful name as a Tool ID. This ID can contain only letters, numbers, underscores (_ ), or dashes (- ). For example, update_user-1 . |
Description | CognigyScript | Provide a detailed description of what the tool does, when it should be used, and its parameters. |
Parameters
Parameter | Type | Description |
---|---|---|
Use Parameters | Toggle | Activate this toggle to add parameters in addition to the tool name and description. The AI Agent will collect all data it needs and call the tool with these parameters filled as arguments. These values can be accessed directly in the input.aiAgent.toolArgs object; see the example after this table. |
Name | Text | Specify the name of the parameter. The name should be clear and concise, and describe the purpose of the parameter. |
Type | Selector | Select the type of the parameter. |
Description | CognigyScript | Explain what the parameter means by providing a brief description of its usage. |
Enum (optional) | Enum | Define a set of values that the parameter can accept. The enum restricts the input to one of the specified values, ensuring only valid options are chosen. The enum is only available for string-type parameters in the Graphical editor. For other types, use the JSON editor. May not be supported by all LLM providers. |
Add Parameter | Button | Add a new parameter. |
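For example, for a tool that collects an email parameter, such as the unlock_account tool described in the Examples section, input.aiAgent.toolArgs might contain a value like this (illustrative):
```json
{
  "email": "jane.doe@example.com"
}
```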
Debug Settings
Parameter | Type | Description |
---|---|---|
Debug Message when called | Toggle | Enable the output of a debug message when the tool is called to provide detailed information about the tool call. |
Advanced
Parameter | Type | Description |
---|---|---|
Condition | CognigyScript | The tool will be enabled only if the condition is evaluated as true. If false, the tool isn’t part of the AI Agent’s Tools within this execution. For example, when using the unlock_account tool, you can specify a condition like context.accountStatus === "locked" . This checks the value in the context, and if it is missing or different, the tool will not be enabled. |
AI Agent MCP Tool Settings
MCP Tool Nodes are child Nodes of AI Agent Nodes. The MCP Tool Nodes connect to a remote MCP server to load tools that the AI Agent can execute. If an AI Agent wants to execute one of the loaded tools, the branch below the MCP Tool Node is triggered. Clicking the MCP Tool Node lets you define the connection, filter loaded tools, and enable debugging with detailed messages about the tool’s execution.
MCP Tool
Parameter | Type | Description |
---|---|---|
Name | CognigyScript | Provide a name for the MCP connection. This name helps you identify the source of the loaded tool. |
MCP Server SSE URL | CognigyScript | Provide the URL to an SSE (Server-Sent Events) endpoint from a remote MCP server. Ensure that you connect only to trusted MCP servers. |
Timeout | Slider | Set the timeout for the MCP connection in seconds. |
Debug Settings
Parameter | Type | Description |
---|---|---|
Debug loaded Tools | Toggle | Enable this parameter to display a debug message with all tools loaded from the MCP server. The debug message also includes tools that have been filtered out in the Advanced section. |
Debug with Parameters | Toggle | Enable this parameter to include the Tool Parameters in the debug message. |
Debug calling Tool | Toggle | Enable the output of a debug message when the tool is called to provide detailed information about the tool call. |
Advanced
Parameter | Type | Description |
---|---|---|
Cache Tools | Toggle | Controls caching of loaded tools. You can disable caching while developing. Ensure that caching is enabled in production for performance reasons. The caching time is 10 minutes. |
Condition | CognigyScript | Sets the condition under which the tool will be activated. If the condition is evaluated as false, the tool isn’t part of the AI Agent’s Tools during execution. For example, when using the unlock_account tool, you can specify a condition like context.accountStatus === "locked" . This checks the value in the context, and if it is missing or different, the tool will not be enabled. |
Tool Filter | Select | Controls whether tools should be excluded from execution. The available options include Blacklist and Whitelist. |
Blacklist | CognigyScript | This parameter appears if you select Blacklist in Tool Filter. Specify the tools that should be blocked from execution. Specify only one tool per field. |
Whitelist | CognigyScript | This parameter appears if you select Whitelist in Tool Filter. Specify the tools you want to allow for execution. Specify only one tool per field. |
Custom Headers | Input fields | Sets custom authentication headers to send with the request to the MCP server. Use the Key and Value fields to enter a header. The Value field supports CognigyScript. After entering the header key, new empty Key and Value fields are automatically added, in case you need to add more headers. Alternatively, you can click Show JSON Editor and enter the headers in the code field. |
Call MCP Tool Settings
In the Flow editor, when you add an MCP Tool Node, a Call MCP Tool Node is automatically created below it. These two Nodes work together to define and execute the chosen tool. The Call MCP Tool Node sets the actual execution point of the chosen tool. This way, you can verify or modify the tool call arguments in the input.aiAgent.toolArgs
object, or add a Say Node before the tool call. When the Call MCP Tool Node is executed, the tool call is sent to the remote MCP server, where the Tool is executed remotely with any arguments set by the AI Agent.
To return the tool result to the AI Agent, enable the Resolve Immediately setting to send the full result returned from the remote MCP server.
As an alternative, use a Resolve Tool Action Node to return a specific result to the AI Agent.
Call MCP Tool
Parameter | Type | Description |
---|---|---|
Resolve Immediately | Toggle | Enable this parameter to immediately resolve the tool action with the full result as the tool answer. |
Storage Options
Parameter | Type | Description |
---|---|---|
How to handle the result | Select | Determine how to handle the MCP tool call result: Store in Input or Store in Context. |
Input Key to store Result | CognigyScript | The parameter appears when Store in Input is selected. The result is stored in the input.aiAgent.toolResult object by default. You can specify another value, but the MCP Tool Result Token won’t work if the value is changed. |
Context Key to store Result | CognigyScript | The parameter appears when Store in Context is selected. The result is stored in the context.aiAgent.toolResult object by default. |
Debug Settings
Parameter | Type | Description |
---|---|---|
Debug Tool Result | Toggle | Enable the output of a debug message with the tool call result after a successful call. |
Examples
AI Agent Tool
In this example, the unlock_account
tool unlocks a user account by providing the email and specifying the reason for the unlocking.
Parameter configuration in JSON:
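For example, a parameter schema along these lines (the email type and description string are illustrative):
```json
{
  "type": "object",
  "properties": {
    "email": {
      "type": "string",
      "description": "The email address of the user account to unlock."
    }
  },
  "required": ["email"],
  "additionalProperties": false
}
```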
- type — the type for a tool parameter schema, which must always be object.
- properties — defines the parameters for the tool configuration:
  - email — a required tool parameter for unlocking the account.
    - type — defines the data type for the tool parameter.
    - description — a brief explanation of what the property represents.
- required — lists email as a required parameter, ensuring that this value is always provided when the tool is called.
- additionalProperties — ensures that the input contains only the email tool parameter, and no others are allowed.
AI Agent MCP Tool and Call MCP Tool
Use Zapier’s Remote MCP server
You can create a custom MCP server with personalized tools by using one of the provided SDKs. For a quicker setup, you can use a third-party provider. For example, Zapier allows you to configure your MCP server, which can be connected to multiple application APIs. To use Zapier as a remote MCP server, follow these steps:
- Log in to your Zapier account, go to the MCP settings page, and configure your MCP server.
- Copy the SSE URL and paste it into the MCP Server SSE URL field of your MCP Tool Node.
- In the Zapier MCP settings, create an action to connect to various APIs. For example, you can create a Zapier action to automatically generate a Google Doc.
More Information
1: Note that not all LLM models support streaming.