
Description
A Question Node asks a question that requests specific information from the user. When the Node is triggered, the Entrypoint shifts to this Node so that the conversation continues only after the user answers, and a new Input object is generated. When a user input is received, it is scored based on natural language understanding (NLU). If an attached Flow has an Intent that scores higher than the Intents in the current Flow, the attached Flow is executed. Intent scoring occurs before the Question Node's validation is completed. After the AI Agent asks a question and the user answers, the answer is validated according to its type. If the answer passes validation, it is stored and the conversation continues.
Question Nodes and Intent Execution
Question Nodes, by default, are triggered repeatedly until a valid answer is provided. To avoid this behavior, you can use an Optional Question or change the Intent Execution setting.
Parameters
Question Types
Question Nodes have a selection of types that determine the validation used before a conversation continues.
Type | Expected user input to answer question | Example |
---|---|---|
Text | Any text input. | |
Yes / No | A positive or negative response. | |
Intent | One of the trained Intents must be identified from the user’s response. | |
Slot | A System Slot or Lexicon Slot must be detected within the user’s response. The slot is defined by name. | |
Date | Any date (system-defined). | |
Number | Any number (system-defined). | |
Temperature | Any temperature (system-defined). | |
Age | Any age (system-defined). | |
Duration | Any time duration (system-defined). | |
Email | Any email address (system-defined). | |
Money | Any amount of money (system-defined). The input needs to include a number and a currency. | 1,300 dollars , 111.21 USD (English), USD 75 , 43 $ , $ 11 , 300 euros , 300 Euro , 150,00 EUR (German), 150.00 EUR (English), EUR 28 , 1900 € , € 200 |
URL | Any reference/address to a resource on the Internet, for example, http://example.com . | |
Percentage | Any percentage (system-defined). | |
Regex | Any custom data format defined by a regular expression must be detected in the user’s response. The regular expression must start with / and end with /g. For example, /^1\d{7}$/g (see the sketch below this table). | |
Data | Any data (input.data) input. | |
xApp | Any xApp input. | |
Custom | Any input. | |
Pattern: License Plate (DE) | A pattern for the German vehicle registration plate. This license plate is a unique alphanumeric identification tag displayed on a vehicle. It consists of letters, numbers, and sometimes special characters, for example, ö , ä , or ü . License plates serve as a means of identifying and registering vehicles, providing important information such as vehicle ownership, registration details, and compliance with legal requirements. | M-345 , x1Y2Z3 , D 12345C |
Pattern: IBAN | A pattern for the International Bank Account Number (IBAN). | DE12345678901234567890 |
Pattern: Bank Identifier Code (BIC) | A pattern for the Bank Identifier Code (BIC). | DEUTDEFF500 |
Pattern: Social Security Number (US) | A pattern for the US Social Security Number. | 123-45-6789 |
Pattern: IP Address (IPv4) | A pattern for the IPv4 address. | 192.168.1.1 |
Pattern: Phone Number | A pattern for the phone number. | +49 0000000000 , 49 0000000000 , +490000000000 , (555) 000-000 |
Pattern: Credit Card | A pattern for the bank card. | 4111111111111111 |
LLM-extracted Entity | Utilizes a chosen LLM to extract entities, such as product codes, booking codes, and customer IDs, from a given string. Go to the LLM Entity Extraction Options section. | |
All data formats supported by the Cognigy NLU for system slot mapping are listed on the Slot Mapping page.
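To make the Regex type concrete, here is a small, self-contained JavaScript sketch (CognigyScript is JavaScript-based) showing which inputs the example expression from the table accepts. The Node performs this matching itself, so the snippet is purely illustrative; the /g flag required in the Node's field is omitted here because RegExp.prototype.test() becomes stateful with it.

```javascript
// Illustrative only: the Question Node applies the configured regex itself.
// The example expression /^1\d{7}$/g matches an 8-digit string starting with 1.
const pattern = /^1\d{7}$/; // /g dropped for a stateless test() call

console.log(pattern.test("12345678")); // true  – starts with 1, 8 digits in total
console.log(pattern.test("22345678")); // false – does not start with 1
console.log(pattern.test("1234567"));  // false – only 7 digits
```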
Channels and Output Types
Question Node output types carry the same functionality as the Say Node.
If you select Date as the Question Type, the Question Node automatically renders a datepicker if the channel supports it. Refer to Datepicker for more information.
LLM Entity Extraction Options
This section appears if you’ve selected the LLM-extracted Entity Question Type. Before using this Question Type, set the Generative AI provider in the Settings.
You can configure the Node to either use the default model defined in the Settings or choose a specific configured LLM.
Alternatively, you can add input examples in the Use JSON Editor code field. For example:
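The JSON content itself isn't reproduced on this page, so the block below is a hypothetical sketch of what the Use JSON Editor field could contain. The "examples", "input", and "entity" keys are illustrative placeholders; the values are taken from the Example Input and Extracted Entity rows in the table below.

```json
{
  "examples": [
    { "input": "My ID is AB54EE, is that ok?", "entity": "AB54EE" },
    { "input": "That would be ah bee see double 4 three", "entity": "ABC443" },
    { "input": "I guess it's 49 A B 8 K", "entity": "49AB8K" }
  ]
}
```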
Parameter | Type | Description |
---|---|---|
Large Language Model | Select | Select a model or use the default one. |
Entity Name | CognigyScript | The name of the entity to extract. For example, customerID . |
Entity Description | CognigyScript | A sentence which describes the entity. For example, An alphanumeric string of 6 characters, e.g. ABC123 or 32G5FD . |
Example Input | Text | Examples of text inputs. For example, My ID is AB54EE, is that ok? , That would be ah bee see double 4 three , I guess it's 49 A B 8 K . |
Extracted Entity | CognigyScript | Examples of extracted entities. For example, AB54EE , ABC443 , 49AB8K . |
Additional Validation | CognigyScript | User input must meet this extra validation criterion, in addition to the built-in validation of the field (for example, Email), to be considered valid. See the sketch below this table. |
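As an illustration of the Additional Validation parameter, here is a minimal CognigyScript (JavaScript) condition for a 6-character alphanumeric customer ID like the one described above. The path input.result is an assumption about where the extracted value ends up; check your Node's result location and adjust it if needed.

```javascript
// Hypothetical Additional Validation for a 6-character alphanumeric entity
// such as the customerID example above. The input.result path is assumed;
// adjust it to your actual result location.
/^[A-Za-z0-9]{6}$/.test(input.result)
```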
Advanced
Parameter | Type | Description |
---|---|---|
Temperature | Indicator | The appropriate sampling temperature for the model. Higher values mean the model takes more risks. |
Timeout | Number | The maximum number of milliseconds to wait for a response from the Generative AI Provider. |
Reprompt Options
Reprompt messages are automatically triggered if the question is not answered correctly, such as when the expected type of input is not provided or a validation does not return true.
Reprompt Methods
- Simple Text
- Channel Message
- LLM Prompt
- Execute Flow and Return
The Simple Text method outputs a simple text message to the user.
Parameter | Type | Description |
---|---|---|
Reprompt Message | CognigyScript | The message to output if the given answer is invalid. For example, Not sure I understood this correctly . |
Result Storage
Escalation - Intents
Allows the conversation to break out of the Question Node if a specified Intent was found.
Add Intents that can trigger the “escalate on Intent” function by typing the Intent name into the “Valid Intents” field and pressing ENTER on your keyboard. Adjust the dedicated Intent score threshold slider to the preferred setting so that the escalation only occurs if one of the listed Intents reaches that score.
Action | Description |
---|---|
Output Message | Outputs a message (equal to a Say Node). |
Skip Question | Skips the Question and enters a specific value into the input.result object. |
Go To Node | Goes to a specific Flow Node and continues from there (equal to Go To Node). |
Execute Flow and Return | Goes to a specific Flow Node and returns to the question after (equal to Execute Flow Node). |
Handover to Human Agent | The conversation is handed to a human agent, who can help you finish the question step and hand it back. |
Escalation on Wrong Answers
Allows the conversation to break out of the Question Node after a number of incorrect answers have been provided.
You can prevent reprompts while the escalation is happening. The option “only escalate once” determines whether the escalation happens only once when the threshold is reached or on every input from the threshold on.
Action | Description |
---|---|
Output Message | Outputs a message (equal to a Say Node). |
Skip Question | Skips the Question and enters a specific value into the input.result object. |
Go To Node | Goes to a specific Flow Node and continues from there (equal to Go To Node). |
Execute Flow and Return | Goes to a specific Flow Node and returns to the question after (equal to Execute Flow Node). |
Handover to Human Agent | The conversation is handed to a human agent who can help you finish the question step and hand it back. |
Handover to Human Agent
As of Release v4.4.0, the Handover to Human Agent option is available. Open the Node Editor to find this option as an escalation action for Intents and Wrong Answers; it lets you escalate questions by creating handovers to a real human agent. When this escalation is triggered, the conversation is handed to a human agent who can then help finish the question step and hand it back.
Reconfirmation Settings
Allows for answers to be reconfirmed before continuing.
This is especially useful when using voice agents and reconfirming what the agent understood
(for example, in Number questions when the user said “my number is three double five triple nine five six eight”).
The answer given to the reconfirmation question has to be a yes/no style answer and follows the same rules as a Yes/No Question.
Reconfirmation Questions can contain a specific token, ANSWER, which is replaced with a short form version of the given answer (for example, “3 EUR” in a Money question). The short form answer is taken from input.activeQuestion.tentativeShortFormAnswer.
Reconfirmation Questions can have a specific reprompt set, which is output before the question if the answer to the question is not of yes/no style.
Advanced
Store Detailed Results: When enabled, this setting stores a more detailed JSON object under the result property of the input. This is useful in case more information is needed.
Skip if Answer in Input: When enabled, this setting skips the Question if the answer is already provided in the input text.
Additional Validation: A CognigyScript condition which must return true for the answer to be considered valid. An example would be an additional validation on an Email Question of input.slots.EMAIL[0].endsWith("cognigy.com"), which would guarantee that only cognigy.com email addresses pass the validation.
Result Location: The location of an answer is determined by default by the question type (for example, input.slots.EMAIL[0] for Email Questions). This can be overwritten using this setting (for example, input.slots.EMAIL would store all found email slots). If the result location doesn’t return a value (that is, it is falsy), the answer is considered invalid.
Forget Question Threshold: This setting determines how long a user can have been “away” from the Node after the question was initially asked. With the default setting of 1, the question has to be answered on the next user input. If a user comes back to the question at a later stage, it is treated as if the question was hit for the first time, and the question is asked again.
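For quick reference, the two CognigyScript (JavaScript) expressions from the Additional Validation and Result Location examples above are repeated here as field values with comments.

```javascript
// Additional Validation for an Email Question:
// only cognigy.com addresses pass the validation.
input.slots.EMAIL[0].endsWith("cognigy.com")

// Result Location override for an Email Question:
// stores all found email slots instead of only the first one.
input.slots.EMAIL
```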
AI-Enhanced Output
To use AI-enhanced output rephrasing, read the Generative AI article.
Answer Preprocessing
You can use various functions of the Text Cleaner class to preprocess the answer to a question before it is evaluated. This can be helpful, for example, when requesting a name using a Text type question or when asking for a part number using a Slot question.
In addition to the Text Cleaner functions, users have the option to rerun the NLU after the cleaning process. This approach allows for tasks such as re-detecting slots or properly filling any remaining slots.
Exclude from Transcript
Excludes the Node output from the conversation transcript. The output remains visible to the end user but isn’t stored in the transcript object or shared with the LLM provider.
You can use this parameter to:
- Hide sensitive or irrelevant data, such as legal disclaimers, so the model doesn’t see or repeat them.
- Prevent the model from copying patterns (called in-context learning) you didn’t want it to learn.
Example
By default, the model repeats the question style it learned from the AI Agent’s earlier question, even though the end user asked for an answer, not a question. Excluding the AI Agent’s earlier question from the transcript prevents the model from copying that pattern.
Use this parameter to maintain confidentiality, for example, to prevent sensitive data from reaching the LLM, or to display messages such as legal disclaimers or system notes that shouldn’t affect the AI Agent’s behavior.
Question Information in Input
When a question is active, meaning that the AI Agent is waiting for the answer, information regarding the question is added to the Input object.
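The exact shape of this information isn't listed on this page, so the following JavaScript sketch is hypothetical; only tentativeShortFormAnswer is referenced elsewhere here (under Reconfirmation Settings), and the other key is an illustrative assumption.

```javascript
// Hypothetical sketch of the question information added to the Input object
// while a question is active. Only tentativeShortFormAnswer is mentioned
// elsewhere on this page; "type" is an illustrative assumption.
const inputSketch = {
  activeQuestion: {
    type: "money",                      // assumed key, not confirmed here
    tentativeShortFormAnswer: "3 EUR"   // short form answer, e.g. for a Money question
  }
};
```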
Slot Fillers
Questions can be combined with Slot Fillers to create a “Missing Pattern”. This mechanism keeps asking the user for the missing information in a very natural way, until all questions have been answered.
Note: Not all LLM models support streaming.