LLM Prompt¶

Description¶
The LLM Prompt Node lets you use Generative AI to create relevant content. To do that, add a text prompt that the Generative AI model continues.
Before using this Node, set up the Generative AI provider in the Settings and select an appropriate model from the list of supported models.
To display the output of the LLM Prompt Node to the user, follow these steps:
- In the Flow editor, add a Say Node below the LLM Prompt Node.
- In the Output Type field, select Text.
- In the Text field, add the LLM Prompt Result Token, as shown in the example below.
- Click Save Node.
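For illustration, assuming the Node stores its result under the default promptResult key in the Input object (see Storage & Streaming Options below), the Say Node's Text field can also reference the result directly with a CognigyScript expression like this:

```
{{input.promptResult}}
```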
If you want the result to be displayed in the chat immediately, without storing it in the Input or Context objects and without using a Say Node, select the Stream to Output option in the Storage & Streaming Options section.
Settings¶
Prompt¶
The prompt to generate completions for.
Additional tags¶
You can inject the recent conversation into the Prompt field by using these tags:
- @cognigyRecentConversation — the tag is replaced with a string that can contain up to 10 recent virtual agent outputs and 10 recent user outputs, for example: Agent: agentOutput1 User: userOutput1 Agent: agentOutput2 User: userOutput2
- @cognigyRecentUserInputs — the tag is replaced with a string that can contain up to 10 recent user outputs, for example: User: userOutput1 User: userOutput2
If you want to access only the last user input, use the Text token in the Prompt field.
When adding a tag, ensure that you leave a line break before and after the tag, for example:
A user had a conversation with a chatbot. The conversation history so far is:
@cognigyRecentConversation
Describe the user sentiment in one very short line.
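For illustration, the following prompt sketch combines the conversation history tag with the last user input; it assumes that the Text token resolves to the CognigyScript expression {{input.text}}:

```
A user is chatting with a virtual agent. The conversation so far is:

@cognigyRecentConversation

The user's latest message is: {{input.text}}
Describe the user sentiment in one very short line.
```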
Advanced¶
Parameter | Type | Description |
---|---|---|
Sampling Method | Select | Methods: - Temperature — determines the level of randomness in the generated text. A higher temperature allows for more diverse and creative outputs, while a lower temperature produces more predictable outputs that are more consistent with the training data. - Top Percentage — restricts generation to the most probable tokens whose cumulative probability is within the specified percentage, resulting in more consistent output. |
Maximal Tokens | Indicator | The maximum number of tokens to generate in the completion. |
Presence Penalty | Indicator | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood of talking about new topics. |
Frequency Penalty | Indicator | Number between -2.0 and 2.0. The penalty assigns a lower probability to tokens frequently appearing in the generated text, encouraging the model to generate more diverse and unique content. |
Use Stops | Toggle | Whether to use a list of stop words to let Generative AI know where the sentence stops. |
Stops | Text | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
Timeout | Number | The maximum number of milliseconds to wait for a response from the Generative AI Provider. |
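For example, if your prompt contains conversation history in the Agent:/User: format shown above, you could enable Use Stops and add stop sequences so that the model stops before writing the user's next turn. The values below are purely illustrative:

```
Use Stops: enabled
Stops: "User:", "Agent:"
```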
Storage & Streaming Options¶
Parameter | Type | Description |
---|---|---|
How to handle the result | Select | Determine how to handle the prompt result: - Store in Input — stores the result in the Input object. To print the prompt result, use the LLM Prompt Result Token in the Say Node. - Store in Context — stores the result in the Context object. To print the prompt result, use the LLM Prompt Result Token in the Say Node. - Stream to Output — streams the result directly into the output. The model's output appears in the conversation chat as it is generated, so you don't need to use the LLM Prompt Result Token and the Say Node. The result is not stored in either the Input or the Context. Note that not all Cognigy LLM Prompt Node providers, such as Google, support streaming; for the models that do, see More information below. If streaming is not supported, the result is written to the Input object. |
Input Key to store Result | CognigyScript | The parameter is active when Store in Input is selected. The result is stored under the promptResult key in the Input object by default. You can specify another key. |
Context Key to store Result | CognigyScript | The parameter is active when Store in Context is selected. The result is stored under the promptResult key in the Context object by default. You can specify another key. |
Stream Output Tokens | CognigyScript | The parameter is active when Stream to Output is selected. Tokens after which to output the stream buffer. The tokens can be punctuation marks or symbols, such as \n. |
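For example, if you select Store in Context and set Context Key to store Result to a custom key, such as the hypothetical summary, you can reference the stored result elsewhere in the Flow with a CognigyScript expression:

```
{{context.summary}}
```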
More information¶
- The Stream to Output feature is supported by the gpt-3.5-turbo and text-davinci-003 models from Microsoft Azure OpenAI and OpenAI, as well as the Anthropic models claude-v1-100k and claude-instant-v1.