
Description
In release 4.69.0, the order property of the Knowledge Chunks in the topK search result is deprecated when using the Search Extract Output Node.
The Node requires the text-embedding-ada-002 model as well as the text-davinci-003 (deprecated) or gpt-3.5-turbo models. For more information, refer to the list of supported providers.
Parameters
Mode
There are three modes that offer different levels of data extraction and presentation:

- Search & Extract & Output
- Search & Extract
- Search Only

The Search & Extract & Output mode performs a search, extracts an answer, and outputs the result as text or an adaptive card. In this mode, the following steps are involved:
- The system performs a knowledge search and retrieves relevant information.
- The retrieved data is sent to a Language Model (LLM) for further processing.
- The LLM extracts key points or answers from the search results, filtering out unnecessary details.
- The extracted information is stored in the Input or Context objects.
- The system automatically outputs the result from the LLM as either plain text or an adaptive card.
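Conceptually, the mode behaves like the pipeline below. This is a minimal TypeScript sketch for illustration only: the function names, the chunk shape, and the context object are assumptions, not part of the Cognigy.AI API.

```typescript
// Illustrative sketch of the Search & Extract & Output pipeline.
// All names below are hypothetical stand-ins for Cognigy internals.

interface KnowledgeChunk {
  text: string;          // assumed shape of a retrieved chunk
  sourceTags: string[];  // tags assigned when the Source was created
}

// Stub: a knowledge search returning the top K matching chunks.
async function searchKnowledgeStore(query: string, topK: number): Promise<KnowledgeChunk[]> {
  return [{ text: `chunk matching "${query}"`, sourceTags: ["demo"] }].slice(0, topK);
}

// Stub: an LLM call that condenses the retrieved chunks into one answer.
async function extractAnswer(query: string, chunks: KnowledgeChunk[]): Promise<string> {
  return `Answer to "${query}" based on ${chunks.length} chunk(s).`;
}

async function searchExtractOutput(query: string, topK: number): Promise<string> {
  const chunks = await searchKnowledgeStore(query, topK); // 1. knowledge search
  const answer = await extractAnswer(query, chunks);      // 2.-3. LLM extraction
  const context: Record<string, string> = {};             // 4. store the result
  context.knowledgeAnswer = answer;
  console.log(answer);                                    // 5. output as plain text
  return answer;
}

void searchExtractOutput("What animals are allowed on a plane?", 3);
```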
The Node uses the models configured for the LLM Prompt Node, Answer Extraction, and Knowledge Search cases.
Knowledge Store
Select a store from the list to allow the Node to retrieve the data.
Search Settings
Parameter | Type | Description |
---|---|---|
Context-Aware Search | Select | Whether to take the transcript’s context into account for the search, allowing an AI Agent to answer follow-up questions. |
Transcript Steps | Slider | Number of prior transcript steps to use in a context-aware search. This parameter is active when the Context-Aware Search setting is enabled. One transcript step consists of a pair: one user input and one AI Agent output. For example, if you select a quantity of 2, the system considers the 2 most recent pairs of user input and AI Agent output, as shown in the sketch after this table. |
Where to store the result | Select | The storage for the search results. Select either the Input or the Context object as the storage. |
String to search for | CognigyScript | The string to search for in the Knowledge Store. The default value is taken from input.text. |
Top K | Slider | Specify how many of the top K knowledge search results should be returned. |
Source Tags | CognigyScript | The tags serve to refine the scope of your knowledge search, allowing you to include only the most pertinent sections of the knowledge base and, as a result, improve the accuracy of search outputs. Before specifying tags, ensure that they were provided during the creation of the Knowledge Sources. Add Tags by specifying each Tag separately and pressing Enter. The maximum number of tags is 5. When you specify multiple Source Tags, the Search Extract Output Node defaults to an AND operator, meaning it considers only Sources that have all the specified Tags. This approach ensures the search results are precise and highly relevant to the end user’s query. To change this behavior, use the Match type for Source Tags parameter. |
Match type for Source Tags | Select | The operator to filter Knowledge Sources by Source Tags. Select one of the following options: AND (default) considers only Sources that have all the specified Tags; OR considers Sources that have at least one of the specified Tags. |
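The following TypeScript sketch makes the Transcript Steps setting concrete by assembling a context-aware search query from the most recent transcript pairs. The TranscriptStep shape and the buildSearchQuery helper are hypothetical, shown only to illustrate how prior steps extend the search string.

```typescript
// Hypothetical helper: not part of the Cognigy.AI API.
interface TranscriptStep {
  userInput: string;   // what the user said
  agentOutput: string; // how the AI Agent replied
}

// Prepends the last `steps` transcript pairs to the current query so the
// search can resolve follow-up questions such as "Give me more details!".
function buildSearchQuery(
  transcript: TranscriptStep[],
  currentInput: string,
  steps: number,
): string {
  const recent = transcript.slice(-steps); // steps = 2 -> 2 most recent pairs
  const context = recent
    .map((s) => `user: ${s.userInput}\nagent: ${s.agentOutput}`)
    .join("\n");
  return context ? `${context}\nuser: ${currentInput}` : currentInput;
}

// With one prior step, the follow-up question gains the animal context.
const query = buildSearchQuery(
  [{
    userInput: "What animals are allowed on a plane?",
    agentOutput: "Up to two dogs or cats in approved transport containers.",
  }],
  "Give me more details!",
  1,
);
console.log(query);
```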
Extract Settings
Parameter | Type | Description |
---|---|---|
Prompt | CognigyScript | The prompt for generating completions. This prompt is designed for ChatGPT-based models, and its behavior may vary depending on the model used. You can change the prompt at your discretion. In the prompt, dedicated tags retrieve information from the Knowledge Store and the Input object, ensuring that the LLM receives all the information needed to generate the most accurate response. |
Temperature | Slider | The appropriate sampling temperature for the model. Higher values mean the model takes more risks. |
Maximal Tokens | Slider | The maximum number of tokens to generate in the completion. |
Presence Penalty | Slider | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood of talking about new topics. |
Frequency Penalty | Slider | Number between -2.0 and 2.0. The penalty assigns a lower probability to tokens frequently appearing in the generated text, encouraging the model to generate more diverse and unique content. |
Use Stops | Toggle | Whether to use a list of stop words to let Generative AI know where the sentence stops. |
Stops | Text | Up to 4 sequences where the API stops generating further tokens. The returned text doesn’t contain the stop sequence. |
Timeout | Number | The maximum amount of time in milliseconds to wait for a response from the Generative AI Provider. |
Timeout Message | CognigyScript | The message to output if the timeout is reached. |
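The Extract Settings correspond to the familiar parameters of an OpenAI-style completion request. The following TypeScript sketch shows roughly how such a request might look, including a timeout; the endpoint, field names, and values are generic assumptions and not Cognigy.AI's actual request.

```typescript
// Generic OpenAI-style chat completion request illustrating how the
// Extract Settings are commonly passed to an LLM provider. Endpoint,
// payload fields, and key handling are assumptions for illustration.

const OPENAI_API_KEY = "<your-key>"; // placeholder

async function requestCompletion(prompt: string): Promise<string> {
  const controller = new AbortController();
  // Timeout: abort if the provider does not respond within 8000 ms.
  const timer = setTimeout(() => controller.abort(), 8000);

  try {
    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: prompt }],
        temperature: 0.7,       // Temperature: higher values take more risks
        max_tokens: 256,        // Maximal Tokens
        presence_penalty: 0.0,  // Presence Penalty (-2.0 to 2.0)
        frequency_penalty: 0.0, // Frequency Penalty (-2.0 to 2.0)
        stop: ["\nuser:"],      // Stops: up to 4 stop sequences
      }),
      signal: controller.signal,
    });
    const data = await response.json();
    return data.choices[0].message.content;
  } catch {
    // Timeout Message (this sketch also returns it on other request errors).
    return "Sorry, I could not get an answer in time.";
  } finally {
    clearTimeout(timer);
  }
}
```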
Output Settings
Parameter | Type | Description |
---|---|---|
Output Mode | Select | The format of the output. Choose between two options: plain text or an adaptive card. |
Output Fallback Text | CognigyScript | The text to output if no answer was found. |
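For reference, an adaptive card is a JSON payload that follows the Adaptive Cards schema (adaptivecards.io). The card below is a generic TypeScript example of how an extracted answer could be presented; the Node's actual card layout may differ.

```typescript
// Minimal Adaptive Card carrying an extracted answer. Generic example
// only; the layout produced by the Node is not documented here.
const answerCard = {
  type: "AdaptiveCard",
  version: "1.5",
  $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
  body: [
    { type: "TextBlock", text: "Answer", weight: "Bolder", size: "Medium" },
    {
      type: "TextBlock",
      text: "Passengers may bring up to two animals in approved containers.",
      wrap: true, // wrap long answers instead of truncating them
    },
  ],
};

console.log(JSON.stringify(answerCard, null, 2));
```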
Error Handling
Parameter | Type | Description |
---|---|---|
Select Error Handling Approach | Select | You can select one of the Error Handling options: Stop Flow Execution, Continue Flow Execution, or Go to Node. |
Error Message (optional) | Text | Active when the Continue Flow Execution option is selected. Optional message to output if the Search or Extract service fails. |
Select Flow | Select | Active when the Go to Node option is selected. Select a Flow from the available options. |
Select Node | Select | Active when the Go to Node option is selected. Select a Node from the available options. |
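Conceptually, the three approaches branch as in the following TypeScript sketch. The type and function are illustrative assumptions that mirror the option names above, not Cognigy.AI's implementation.

```typescript
// Hypothetical model of the three error-handling approaches.
type ErrorHandling =
  | { approach: "stopFlowExecution" }
  | { approach: "continueFlowExecution"; errorMessage?: string }
  | { approach: "goToNode"; flowId: string; nodeId: string };

function handleSearchError(config: ErrorHandling): string {
  switch (config.approach) {
    case "stopFlowExecution":
      return "Flow execution stopped.";
    case "continueFlowExecution":
      // Output the optional message, then carry on with the Flow.
      return config.errorMessage ?? "";
    case "goToNode":
      // Jump to the selected Node in the selected Flow.
      return `Continuing at ${config.flowId}/${config.nodeId}.`;
  }
}

console.log(handleSearchError({ approach: "continueFlowExecution", errorMessage: "Search failed." }));
```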
Debugging Options
Parameter | Type | Description |
---|---|---|
Show Token Count | Toggle | Send a debug message containing the input, output, and total token count. The message appears in the Interaction Panel when Debug Mode is enabled. Cognigy.AI uses the GPT-3 tokenizer algorithm, so actual token usage may vary depending on the model used. The parameter is inactive by default. |
Log Request and Completion | Toggle | Send a debug message containing the request sent to the LLM provider and the subsequent completion. The message appears in the Interaction Panel when Debug Mode is enabled. The parameter is inactive by default. |
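Because the token counts come from a GPT-3-style tokenizer, treat them as estimates when another model is in use. A common rule of thumb (an approximation, not the tokenizer Cognigy.AI uses) is about four characters of English text per token:

```typescript
// Rough token estimate: ~4 characters per token for English text.
// Approximation only; real tokenizers split text differently per model.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const prompt = "What animals are allowed on a plane?";
console.log(`~${estimateTokens(prompt)} input tokens`); // ~9 input tokens
```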
Examples
Context-Aware Search
By considering the context, the search can help users find information within a transcript more quickly, as it understands the nuances of the content. When you enable a context-aware search, you can configure the number of prior transcript steps to use in the search to suit your requirements. This number affects the depth of context considered when retrieving search results. The following table shows a dialogue between a user and an AI Agent, comparing the agent’s responses with the Context-Aware Search feature disabled and enabled.

Context-Aware Search is off | Context-Aware Search is on |
---|---|
user: What animals are allowed on a plane? AI Agent: Passengers are allowed to bring a maximum of two animals (dogs or cats) on the plane, either in the cabin or in the cargo hold, as long as they are in approved transport containers. user: Give me more details! AI Agent: Unfortunately, I can’t answer your question. | user: What animals are allowed on a plane? AI Agent: Passengers are allowed to bring a maximum of two animals (dogs or cats) on the plane, either in the cabin or in the cargo hold, as long as they are in an approved transport container. user: Give me more details! AI Agent: The approved transport container for animals on a plane must have a maximum size of 118 cm (55 x 40 x 23 cm) or 47 in (22 x 16 x 9 in) and a total weight, including the animal, not exceeding 8 kg. The container should be escape-proof, leak-proof, bite-proof, and lined with absorbent material. Only one transport container per passenger is allowed in the cabin, and the number of animals per container is limited to one, with exceptions for specific cases. |