
Description
This Node is in legacy mode, which means it is deprecated. Use a new version of this Node to get more flexibility when building AI Agents. The Node supports two modes:
- Chat. This mode is activated by default and is preferable for dynamic conversations and interactions with the model. It takes the context of previous user and AI Agent messages into account, up to the number of transcript turns (messages) set in the Transcript Turns setting.
- Prompt. This mode is preferable for single-turn tasks or generating text based on a single prompt.
Parameters
Large Language Model
The selected Default model is the model that you specified in Settings > Generative AI Settings of your Project. You can select a different model from the list or override the selected model using the Custom Model Options parameter.
Instruction
This is either the prompt for completions or the system message for chat. Additionally, you can inject the recent conversation into the Instruction (System Message/Prompt) field by using these tags:
- @cognigyRecentConversation — the tag is replaced with a string that can contain up to 10 recent AI Agent and 10 recent user outputs.
- @cognigyRecentUserInputs — the tag is replaced with a string that can contain up to 10 recent user outputs.
If you want to access only the last user input, specify the Text token in the Instruction (System Message/Prompt) field.
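For illustration only, an Instruction (System Message/Prompt) for Chat mode that combines the @cognigyRecentConversation tag with the last user input might look like the following sketch. The wording is an example, and it assumes that the Text token corresponds to the CognigyScript expression {{input.text}}:

```text
You are a friendly support assistant. Answer in two sentences or fewer.

Recent conversation:
@cognigyRecentConversation

Reply to the user's last message: {{input.text}}
```

At runtime, the tag is replaced with the recent AI Agent and user outputs described above.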
Chat Mode
Activate the toggle to enable Chat mode.
Advanced
Parameter | Type | Description |
---|---|---|
Sampling Method | Select | The method used to sample the model's output, for example, Temperature or Top P. |
Maximal Tokens | Indicator | The maximum number of tokens to generate in the completion. |
Presence Penalty | Indicator | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood of talking about new topics. |
Frequency Penalty | Indicator | Number between -2.0 and 2.0. The penalty assigns a lower probability to tokens frequently appearing in the generated text, encouraging the model to generate more diverse and unique content. |
Use Stops | Toggle | Whether to use a list of stop words to let Generative AI know where the sentence stops. |
Stops | Text | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
Timeout | Number | The maximum number of milliseconds to wait for a response from the Generative AI Provider. |
Response Format | Select | Choose the format for the model’s output result. |
Seed | Number | Use this parameter for consistent output when referring to the same LLM Prompt Node multiple times. Specify any integer number, for example, 123. The number in the Seed field and the prompt in the Instruction (System Message/Prompt) field should remain unchanged during subsequent references to the Node. Note that in OpenAI, this parameter is in Beta and is supported only by certain models. |
Storage & Streaming Options
Parameter | Type | Description |
---|---|---|
How to handle the result | Select | Determine how to handle the prompt result: Store in Input, Store in Context, or Stream to Output. |
Input Key to store Result | CognigyScript | The parameter appears when Store in Input is selected. The result is stored in the promptResult Input object by default. You can specify another key. |
Context Key to store Result | CognigyScript | The parameter appears when Store in Context is selected. The result is stored in the promptResult Context object by default. You can specify another key. |
Stream Buffer Flush Tokens | Text Array | The parameter appears when Stream to Output is selected. It defines tokens that trigger the stream buffer to flush to the output. The tokens can be punctuation marks or symbols, such as \n, ., !, ?, or any other symbols that act as delimiters for complete logical statements. When Cognigy.AI detects one of these tokens, it promptly flushes the token buffer into the voice or text chat. |
Stream Buffer Flush Overrides | Text Array | The parameter appears when Stream to Output is selected. It allows using regular expressions (without leading or trailing slashes) to control stream buffer flushing. A trailing $ is automatically added to match patterns at the end of the buffer. For example, \d+\. checks for a number followed by a dot at the end of the string. |
Output result immediately | Toggle | The parameter appears when you select either Store in Input or Store in Context. This parameter allows you to output results immediately without using the Say Node and LLM Prompt token. |
Store Detailed Results | Toggle | The parameter appears when you select either Store in Input or Store in Context, or when you enable Store Copy in Input. This parameter allows you to save detailed results of the LLM’s generated output. By default, the result is stored in the promptResult object. You can specify another value in the Context Key to store Result field to save it in the Context object, or in the Input Key to store Result field to save it in the Input object. The object contains keys such as result , finishReason , and usage . It may also include detailedResult if completion models are used, as well as firstChunk and lastChunk in some streaming results, depending on the LLM provider. An illustrative sketch of this object follows the tables in this section. |
Store Copy in Input | Toggle | The parameter appears when Stream to Output is selected. In addition to streaming the result to the output, store a copy in the Input object by specifying a value in the Input Key to store Result field. |
Input Key to store Result | CognigyScript | The parameter appears when Store Copy in Input is selected. The result is stored in the promptResult Input object by default. You can specify another key. |
The preconfigured Stream Buffer Flush Overrides are listed in the following table.
Regex | Description | Example |
---|---|---|
\d+\. | A number followed by a dot. | 26.08 |
\b(?:Dr|Ms|Mr|Mrs|Prof|Sr|Jr|ca)\. | Common abbreviations followed by a dot. | Mr. |
\b[A-Z]\. | A single capital letter followed by a dot. | M. Smith |
\.\.\. | Three dots used for omission. | … |
\b.\..\. | Two-letter abbreviations. | i.e., e.g. |
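For orientation, when Store Detailed Results is enabled, the stored object might look roughly like the following sketch. The keys result, finishReason, and usage come from the description above; the nested token counts and the example values are placeholders and differ between LLM providers:

```json
{
  "result": "Sure, I can help you with that.",
  "finishReason": "stop",
  "usage": {
    "promptTokens": 412,
    "completionTokens": 58,
    "totalTokens": 470
  }
}
```

You can then access the stored object with CognigyScript, for example {{input.promptResult}} or {{context.promptResult}}, depending on the selected storage location and key.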
Error Handling
Parameter | Type | Description |
---|---|---|
Log to System Logs | Toggle | Log errors to the system logs. They can be viewed on the Logs page of your Project. The parameter is inactive by default. |
Select Error Handling Approach | Select | Choose how to handle errors when the LLM Prompt Node fails, for example, Continue Flow Execution or Go to Node. |
Error Message (optional) | Text | The parameter appears when Continue Flow Execution is selected. Add a message to output if the LLM Prompt Node fails. |
Select Flow | Select | The parameter appears when Go to Node is selected. Select a Flow from the available options. |
Select Node | Select | The parameter appears when Go to Node is selected. Select a Node from the available options. |
Custom Options
These settings are helpful if you need to use parameters that are not included in the LLM Prompt Node or if you need to overwrite existing ones.
Parameter | Type | Description |
---|---|---|
Custom Model Options | JSON | Additional parameters for the LLM model. You can specify individual parameters as well as entire functions. These parameters customize the behavior of the model, such as adjusting temperature, top_k, or presence_penalty. Note that if you specify a parameter that is already set in the Node, for example, temperature, it will be overwritten. To view the full list of available parameters for your model, refer to your LLM provider’s API documentation, for example, OpenAI or Azure OpenAI. For an illustrative example, see below the table. |
Custom Request Options | JSON | Additional parameters for the LLM request. These parameters customize the request itself, such as setting options related to timeout, retries, or headers. For more information, refer to the LLM provider’s API documentation. Examples: { "timeout": 5000 } or { "headers": { "Authorization": "Bearer <token>" } } |
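For illustration, a Custom Model Options value might look like the following sketch. The parameter names and values (temperature, top_p, presence_penalty) are placeholders; which parameters are valid depends on your LLM provider's API, and any parameter set here overwrites the corresponding Node setting:

```json
{
  "temperature": 0.7,
  "top_p": 0.9,
  "presence_penalty": 0.5
}
```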
Forcing Model Versions
You can force the LLM Prompt Node to use a specific model version by including it in the Custom Model Options. The Node then uses the specified version of the language model instead of the default or any other available version, which gives you more control over which model generates prompts or responses. You can use models from any LLM provider supported by Cognigy, including models that are not yet directly integrated. However, you can only replace one model with another within the same provider.
Let’s consider an example with the Anthropic provider: forcing the LLM Prompt Node to use the claude-3-sonnet-20240229 model, even though the LLM resource defaults to the claude-3-5-sonnet-20240620 model:
- Create an Anthropic LLM resource, for example, for the claude-3-5-sonnet-20240620 model.
- Create a Flow and add an LLM Prompt Node to it.
- In the LLM Prompt Node, select the claude-3-5-sonnet-20240620 model from the Large Language Model list.
- Override the model selection. In the Custom Model Options field, specify { "model": "claude-3-sonnet-20240229" }.
- Click Save Node.
The LLM Prompt Node will now use the claude-3-sonnet-20240229 model.
Debugging Settings
You can trigger two types of debug logs when using the Interaction Panel. These logs are available only in the Interaction Panel and are not intended for production debugging. You can combine both log types.
Parameter | Type | Description |
---|---|---|
Show Token Count | Toggle | Send a debug message containing the input, output, and total token count. The message appears in the Interaction Panel when debug mode is enabled. Cognigy.AI uses the GPT-3 tokenizer algorithm, so actual token usage may vary depending on the model used. The parameter is inactive by default. |
Log Request and Completion | Toggle | Send a debug message containing the request sent to the LLM provider and the subsequent completion. The message appears in the Interaction Panel when debug mode is enabled. The parameter is inactive by default. |
More Information
1: Note that not all LLM models support streaming.