Description

A Question Node asks a question that requests specific information from the user. When the Node is triggered, the Entrypoint shifts to this Node so that the conversation continues only after the user answers, and a new Input object is generated. When user input is received, it is scored based on natural language understanding (NLU). If an attached Flow has an Intent that scores higher than the Intents in the current Flow, the attached Flow is executed; this Intent scoring occurs before the validation of the Question Node is completed. After the AI Agent asks a question and the user answers, the answer is validated according to its type. If validation passes, the answer is stored and the conversation continues.
Question Nodes, by default, are triggered repeatedly until a valid answer is provided. To avoid this behavior, you can use an Optional Question or change the Intent Execution setting.

Parameters

Question Nodes have a selection of types that determine the validation used before a conversation continues.
  • Text: Any text input.
  • Yes / No: A positive or negative response.
  • Intent: One of the trained Intents must be identified from the user's response.
  • Slot: A System Slot or Lexicon Slot must be detected within the user's response. The slot is defined by name.
  • Date: Any date (system-defined).
  • Number: Any number (system-defined).
  • Temperature: Any temperature (system-defined).
  • Age: Any age (system-defined).
  • Duration: Any time duration (system-defined).
  • Email: Any email address (system-defined).
  • Money: Any amount of money (system-defined). The input needs to include a number and a currency:
      • Number: if the number has decimals, they need to be separated by the respective separator of the conversation's language. For example, separate decimals with a period for English and with a comma for German.
      • Currency: accepted as a symbol or written out, and can be written before or after the number.
    Examples: 1,300 dollars, 111.21 USD (English), USD 75, 43 $, $ 11, 300 euros, 300 Euro, 150,00 EUR (German), 150.00 EUR (English), EUR 28, 1900 €, € 200
  • URL: Any reference or address to a resource on the Internet, for example, http://example.com.
  • Percentage: Any percentage (system-defined).
  • Regex: Any custom data format defined by a regular expression must be detected in the user's response. The regular expression must start with / and end with /g, for example, /^1\d{7}$/g.
  • Data: Any data (input.data) input.
  • xApp: Any xApp input.
  • Custom: Any input.
  • Pattern: License Plate (DE): A pattern for the German vehicle registration plate. This license plate is a unique alphanumeric identification tag displayed on a vehicle. It consists of letters, numbers, and sometimes special characters, for example, ö, ä, or ü. License plates serve as a means of identifying and registering vehicles, providing information such as vehicle ownership, registration details, and compliance with legal requirements. Examples: M-345, x1Y2Z3, D 12345C
  • Pattern: IBAN: A pattern for the International Bank Account Number (IBAN). Example: DE12345678901234567890
  • Pattern: Bank Identifier Code (BIC): A pattern for the Bank Identifier Code (BIC). Example: DEUTDEFF500
  • Pattern: Social Security Number (US): A pattern for the US Social Security Number. Example: 123-45-6789
  • Pattern: IP Address (IPv4): A pattern for the IPv4 address. Example: 192.168.1.1
  • Pattern: Phone Number: A pattern for a phone number. Examples: +49 0000000000, 49 0000000000, +490000000000, (555) 000-000
  • Pattern: Credit Card: A pattern for a bank card number. Example: 4111111111111111
  • LLM-extracted Entity: Utilizes a chosen LLM to extract entities, such as product codes, booking codes, and customer IDs, from a given string. Go to the LLM Entity Extraction Options.
All data formats supported by the Cognigy NLU for system slot mapping are listed on the Slot Mapping page.
Question Node output types carry the same functionality as the Say Node.
If you select Date as the Question Type, the Question Node automatically renders a datepicker if the channel supports it. Refer to Datepicker for more information.
This section appears if you've selected the LLM-extracted Entity Question Type. Before using this Question Type, set the Generative AI provider in the Settings. You can configure the Node to either use the default model defined in the Settings or choose a specific configured LLM.
  • Large Language Model (Select): Select a model or use the default one.
  • Entity Name (CognigyScript): The name of the entity to extract. For example, customerID.
  • Entity Description (CognigyScript): A sentence that describes the entity. For example, An alphanumeric string of 6 characters, e.g. ABC123 or 32G5FD.
  • Example Input (Text): Examples of text inputs. For example, My ID is AB54EE, is that ok?, That would be ah bee see double 4 three, I guess it's 49 A B 8 K.
  • Extracted Entity (CognigyScript): Examples of extracted entities. For example, AB54EE, ABC443, 49AB8K.
  • Additional Validation (CognigyScript): User input must meet this extra validation criterion, in addition to the built-in field validation (for example, Email), to be considered valid.
Alternatively, you can add input examples in the Use JSON Editor code field. For example:
{
  "My ID is AB54EE, is that ok?": "AB54EE",
  "That would be ah bee see double 4 three": "ABC443",
  "I guess it's 49 A B 8 K": "49AB8K"
}
  • Temperature (Indicator): The sampling temperature for the model. Higher values mean the model takes more risks.
  • Timeout (Number): The maximum number of milliseconds to wait for a response from the Generative AI Provider.
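As a general note on the Temperature setting (this reflects common LLM sampling behavior, not a rule specific to this Node): values closer to 0 tend to make the extraction more deterministic and repeatable, while higher values allow more varied output.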
Reprompt messages are automatically triggered if the question is not answered correctly, such as when the expected type of input is not provided or a validation does not return true.
Reprompt Methods:
  • Simple Text
  • Channel Message
  • LLM Prompt
  • Execute Flow and Return
The Simple Text method outputs a simple text message to the user.
  • Reprompt Message (CognigyScript): The message to output if the given answer is invalid. For example, Not sure I understood this correctly.
Question results are always stored in input.result. If Store Result in Context is enabled, the Question Result is also stored in the Context object. If Store Result to Contact Profile is enabled, the Question Result is also stored in the Profile object.
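For example, assuming an Email Question answered with jane.doe@example.com and Store Result in Context enabled with a hypothetical Context key customerEmail, the stored values could look roughly like this (a minimal sketch, not the complete Input and Context objects):
{
  "input": {
    "result": "jane.doe@example.com"
  },
  "context": {
    "customerEmail": "jane.doe@example.com"
  }
}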
Allows the conversation to break out of the Question Node if a specified Intent was found.
  • Output Message: Outputs a message (equal to a Say Node).
  • Skip Question: Skips the Question and enters a specific value into the input.result object.
  • Go To Node: Goes to a specific Flow Node and continues from there (equal to a Go To Node).
  • Execute Flow and Return: Goes to a specific Flow Node and returns to the question afterward (equal to an Execute Flow Node).
  • Handover to Human Agent: The conversation is handed to a human agent, who can help you finish the question step and hand it back.
Add Intents that can trigger the Escalate on Intent function by typing the Intent name into the Valid Intents field and pressing ENTER. Adjust the dedicated Intent Score Threshold slider to the preferred setting so that the escalation only occurs if one of the listed Intents reaches that score.
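For example (the values here are illustrative), with the threshold set to 0.8, a listed Intent that scores 0.75 on a user input would not trigger the escalation, while a score of 0.85 would.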
Allows the conversation to break out of the Question Node after a number of incorrect answers have been provided.
  • Output Message: Outputs a message (equal to a Say Node).
  • Skip Question: Skips the Question and enters a specific value into the input.result object.
  • Go To Node: Goes to a specific Flow Node and continues from there (equal to a Go To Node).
  • Execute Flow and Return: Goes to a specific Flow Node and returns to the question afterward (equal to an Execute Flow Node).
  • Handover to Human Agent: The conversation is handed to a human agent, who can help you finish the question step and hand it back.
You can prevent reprompts while the escalation is happening. The option "only escalate once" determines whether the escalation happens only once, when the threshold is reached, or on every input from the threshold onward.
As of Release v4.4.0, the Handover to Human Agent option is available. In the Node Editor, you will find it as an escalation action for Intents and Wrong Answers; it escalates questions by creating handovers to a real human agent. When this escalation is triggered, the conversation is handed to a human agent, who can then help you finish the question step and hand it back.
Allows answers to be reconfirmed before continuing. This is especially useful for voice agents, to reconfirm what the agent understood (for example, in Number questions when the user said "my number is three double five triple nine five six eight"). The answer given to the reconfirmation question has to be a yes/no style answer and follows the same rules as a Yes/No Question.
Reconfirmation Questions can contain the specific token ANSWER, which is replaced with a short form version of the given answer (for example, "3 EUR" in a Money question). The short form answer is taken from input.activeQuestion.tentativeShortFormAnswer.
Reconfirmation Questions can have a specific reprompt set, which is output before the question if the answer to the question is not of yes/no style.
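For example, a Reconfirmation Question for a Money Question could read (the wording is illustrative): "You said ANSWER. Is that correct?" If the user answered "three hundred euros", ANSWER would be replaced with a short form such as 300 EUR.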
  • Store Detailed Results: When enabled, this setting stores a more detailed JSON object under the result property of the input. This is useful in case more information is needed.
  • Skip if Answer in Input: When enabled, this setting skips the Question if the answer is already provided in the input text.
  • Additional Validation: A CognigyScript condition which must return true for the answer to be considered valid. For example, an additional validation on an Email Question of input.slots.EMAIL[0].endsWith("cognigy.com") would guarantee that only cognigy.com email addresses pass the validation.
  • Result Location: The location of an answer is determined by default by the Question Type (for example, input.slots.EMAIL[0] for Email Questions). This can be overwritten using this setting (for example, input.slots.EMAIL would store all found email slots). If the result location doesn't return a value (that is, it is falsy), the answer is considered invalid.
  • Forget Question Threshold: This setting determines how long a user can have been "away" from the Node after the question was initially asked. With the default setting of 1, the question has to be answered on the next user input. If a user input comes back to the question at a later stage, it is treated as if the question was hit for the first time and the question is asked again.
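As an illustrative case for Skip if Answer in Input: if an Email Question is reached and the triggering user message already contains an email address such as jane.doe@example.com, the question is not asked and the detected address is used as the answer.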
To use AI-enhanced output rephrasing, read the Generative AI article.
You can use various functions of the Text Cleaner class to preprocess the answer to a question before it is evaluated. This can be helpful, for example, when requesting a name using a text type question or when asking for a part number using a slot question. In addition to the Text Cleaner functions, users have the option to rerun NLU after the cleaning process. This approach allows for tasks such as re-detecting slots or properly filling any remaining slots.

Exclude from Transcript

Excludes the Node output from the conversation transcript. The output remains visible to the end user but isn’t stored in the transcript object or shared with the LLM provider. You can use this parameter to:
  • Hide sensitive or irrelevant data, such as legal disclaimers, so the model doesn’t see or repeat them.
  • Prevent the model from copying patterns (called in-context learning) you didn’t want it to learn.
By default, the model repeats the question style it learned from the AI Agent's earlier question, even though the end user asked for an answer, not a question. For example:
AI Agent: What is your favorite color? (included in the conversation transcript)
End User: Blue.
Later:
End User: Tell me your favorite food.
AI Agent: What is your favorite food?
By excluding the AI Agent’s earlier question from the transcript, the same conversation looks like this:
AI Agent: What is your favorite color? (excluded from the conversation transcript)
End User: Blue.
Later:
End User: Tell me your favorite food.
AI Agent: I enjoy pizza.
Use this parameter to maintain confidentiality, for example, to prevent sensitive data from reaching the LLM, or to display messages such as legal disclaimers or system notes that shouldn't affect the AI Agent's behavior.

Question Information in Input

When a question is active, meaning that the AI Agent is waiting for the answer, information regarding the question is added to the Input object.
"activeQuestion": {
    "nodeId": "18b158bf-71a3-4d4f-a31f-812b1810f8af",
    "type": "yesNo",
    "lastExecutedAt": 2,
    "forgetQuestionThreshold": 1,
    "repromptCount": 1,
    "escalationCount": 0
}
This information can be used to trigger specific actions on escalation or to jump back to the Question Node after an escalation.
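For example (a minimal sketch; the threshold value is illustrative), an If Node condition could use this information to branch into an escalation path once the active question has been reprompted several times:
input.activeQuestion && input.activeQuestion.repromptCount >= 2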
Questions can be combined with Slot Fillers to create a “Missing Pattern”. This mechanism keeps asking the user for the missing information in a very natural way, until all questions have been answered.
