Description

This Node uses a Large Language Model (LLM) to extract entities, such as product codes, booking codes, and customer IDs, from a given string. The LLM Entity Extract Node is suitable for both chat and voice use cases: in a chat interface, it processes text input, while in a voice interface, it can recognize and analyze spoken language. Before using this Node, set the Generative AI provider in the Settings. You can configure the Node to either use the default model defined in the Settings or choose a specific configured LLM. To output the result, add a Say Node below the LLM Entity Extract Node and, in its Text field, reference the key specified in the Storage Options, for example, {{input.extractedEntity}}.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| Large Language Model | List | Select a model or use the default one. |
| Entity Name | CognigyScript | The name of the entity to extract, for example, customerID. |
| Entity Description | CognigyScript | A sentence that describes the entity, for example, "An alphanumeric string of 6 characters, e.g. ABC123 or 32G5FD." |
| Example Input | Text | Examples of text inputs, for example, "My ID is AB54EE, is that ok?", "That would be ah bee see double 4 three", or "I guess it's 49 A B 8 K". Alternatively, you can click Show JSON Editor and add input examples in the code field. |
| Extracted Entity | CognigyScript | The entities extracted from the example inputs, for example, AB54EE, ABC443, 49AB8K. |

JSON Input Examples

```json
{
  "My ID is AB54EE, is that ok?": "AB54EE",
  "That would be ah bee see double 4 three": "ABC443",
  "I guess it's 49 A B 8 K": "49AB8K"
}
```
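How the Node turns these pairs into a model request is not documented, but conceptually each pair serves as a few-shot example for the LLM. The following TypeScript sketch is only an illustration of that idea; buildExtractionPrompt and all variable names are hypothetical, not Cognigy internals:

```typescript
// Illustrative sketch only: Cognigy does not publish the Node's internal
// prompt. This shows how the entity name, description, and example pairs
// could be combined into a few-shot extraction prompt.
const entityName = "customerID";
const entityDescription =
  "An alphanumeric string of 6 characters, e.g. ABC123 or 32G5FD.";
const examples: Record<string, string> = {
  "My ID is AB54EE, is that ok?": "AB54EE",
  "That would be ah bee see double 4 three": "ABC443",
  "I guess it's 49 A B 8 K": "49AB8K",
};

// buildExtractionPrompt is a hypothetical helper, not a Cognigy API.
function buildExtractionPrompt(userText: string): string {
  const shots = Object.entries(examples)
    .map(([input, output]) => `Input: ${input}\nEntity: ${output}`)
    .join("\n\n");
  return [
    `Extract the entity "${entityName}" from the user's text.`,
    `Entity description: ${entityDescription}`,
    shots,
    `Input: ${userText}\nEntity:`,
  ].join("\n\n");
}

console.log(buildExtractionPrompt("sure, it's 7 7 B C A ef"));
```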
| Parameter | Type | Description |
| --- | --- | --- |
| Temperature | Indicator | The sampling temperature for the model. Higher values mean the model will take more risks. |
| Timeout | Number | The maximum number of milliseconds to wait for a response from the Generative AI provider. |
| Response Format | Select | The format of the model's output result. You can select one of the following options:<br>• None — no response format is requested. Use this option with an LLM provider that does not accept or support a response format, or to fall back to the provider's built-in default. This option is selected by default.<br>• Text — the model returns messages in text format.<br>• JSON Object — the model returns messages in JSON format. In contrast to the LLM Prompt Node, you don't need to instruct the model to return a JSON object; the Node is already configured to do so. Note that not all LLMs support this parameter, which can cause model calls to fail. For more information, refer to the LLM provider's API documentation. |
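Where the provider supports it, the JSON Object option corresponds to the provider's structured-output setting. As a rough illustration, OpenAI's Chat Completions API exposes this as a response_format field; the sketch below is an assumption about the shape of such a request, not Cognigy's literal request body, and the model name is a placeholder:

```typescript
// Sketch of an OpenAI-style request with JSON output enforced. The Node
// sends something equivalent on your behalf when JSON Object is selected;
// Cognigy's actual request is not documented here.
const requestBody = {
  model: "gpt-4o-mini", // placeholder model name
  messages: [{ role: "user", content: "My ID is AB54EE, is that ok?" }],
  response_format: { type: "json_object" },
};
```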
| Parameter | Type | Description |
| --- | --- | --- |
| How to handle the result | Select | Determine how to handle the prompt result:<br>• Store in Input — stores the result in the Input object.<br>• Store in Context — stores the result in the Context object. |
| Input Key to store Result | CognigyScript | Appears when Store in Input is selected. By default, the result is stored under the extractedEntity key in the Input object. You can specify another key. |
| Context Key to store Result | CognigyScript | Appears when Store in Context is selected. By default, the result is stored under the extractedEntity key in the Context object. You can specify another key. |
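With the default keys, a downstream Say Node can output the result directly. For example, its Text field could contain:

```text
Your ID is {{input.extractedEntity}}.
```

When Store in Context is selected, reference {{context.extractedEntity}} instead.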
When using the Interaction Panel, you can trigger two types of debug logs. These logs are available only in the Interaction Panel and are not intended for production debugging. You can also combine both log types.
| Parameter | Type | Description |
| --- | --- | --- |
| Show Token Count | Toggle | Send a debug message containing the input, output, and total token counts. The message appears in the Interaction Panel when Debug Mode is enabled. Cognigy.AI uses the GPT-3 tokenizer algorithm, so actual token usage may vary depending on the model used. The parameter is inactive by default. |
| Log Request and Completion | Toggle | Send a debug message containing the request to the LLM provider and the resulting completion. The message appears in the Interaction Panel when Debug Mode is enabled. The parameter is inactive by default. |
