Updated in 2025.23

Description

The AI Agent Node assigns a job to an AI Agent, provides instructions and tools for that job, and gives the AI Agent access to the knowledge it can use when holding a conversation with a user. To configure this Node, follow these steps:
  1. Define an AI Agent job.
  2. Define the tool actions needed to perform this job.

Parameters

Parent Node

This configuration assigns a job to an AI Agent, defines its role and responsibilities, and provides additional instructions or context to guide its actions.
Parameter | Type | Description
AI Agent | Selector | Select the AI Agent.
Job Name | CognigyScript | Specify the name of the job. For example, Customer Support Specialist.
Job Description | CognigyScript | Provide a description of the job responsibilities to guide the AI Agent's interactions. For example, Assist customers with product issues, escalate complex cases, and provide guidance on best practices.
Instructions and Context | Toggle | Add specific instructions or context as a system message to help the AI Agent better fulfill the job requirements. For example, Stay professional and friendly; focus on problem-solving and clarity. These instructions are considered in addition to those specified in the AI Agent creation settings.
Parameter | Type | Description
Long-Term Memory Injection | Selector | Allow the AI Agent to access Contact Profile information for the current user. Select one of the following options:
  • None – no memory.
  • Inherit from AI Agent – use the settings specified in the AI Agent creation settings.
  • Inject full Contact Profile – use all information from the Contact Profile.
  • Inject Contact Memories only – use information only from the Memories field in the Contact Profile.
  • Inject selected Profile fields – use information from specific fields in the Contact Profile.
Selected Profile Fields | Text | This parameter appears when the Inject selected Profile fields option is selected. Enter specific fields from the Contact Profile for targeted data use. Specify each field using the Profile keys format and press Enter to apply it.
Short-Term Memory Injection | CognigyScript | Specify a static string or a dynamic value via CognigyScript to make available to the AI Agent in the current turn.
Parameter | Type | Description
Knowledge Injection | Selector | Use the Knowledge AI feature for the AI Agent. Select one of the following options:
  • Never — do not use the Knowledge Stores.
  • When Required — let the AI Agent decide when querying the Knowledge Stores is required to help the user.
  • Once for Each User Input — query the Knowledge Store(s) after each user input. Note that executing a query on every user input can lead to increased costs and latency.
Use AI Agent Knowledge | Toggle | Appears when you select When Required or Once for Each User Input. Enable this option to use the Knowledge Store configured in the AI Agent creation settings.
Use Job Knowledge | Toggle | Appears when you select When Required or Once for Each User Input. Enable this option to configure a specific Knowledge Store for this particular job, allowing the AI Agent to access job-specific data or resources.
Job Knowledge Store | Selector | Appears when you select When Required or Once for Each User Input and Use Job Knowledge is enabled. Select a specific Knowledge Store for this AI Agent's job.
Top K | Slider | Appears when you select When Required or Once for Each User Input. Specify how many knowledge chunks to return. Providing more results gives the AI Agent additional context but may increase noise and token usage.
Source Tags | CognigyScript | Appears when you select When Required or Once for Each User Input. Tags refine the scope of your knowledge search, including only the most relevant sections of the knowledge base to improve accuracy. Before specifying tags, ensure they were provided during the creation of the Knowledge Sources. Add tags by entering each one separately and pressing Enter. You can add a maximum of 5 tags. When multiple Source Tags are specified, the knowledge search defaults to the AND operator, meaning it only considers Sources that have all specified tags. To change this behavior, adjust the Match Type for Source Tags parameter.
Match Type for Source Tags | Select | Appears when you select When Required or Once for Each User Input. Defines the operator for filtering Knowledge Sources by tags:
  • AND — the default; requires all tags to match. Example: filtering by S-a and S-b only includes Sources with both tags.
  • OR — requires at least one tag to match. Example: filtering by S-a or S-b includes Sources with either tag.
Generate Search Prompt | Toggle | Appears when you select Once for Each User Input. Enabled by default. Generates a context-aware search prompt before executing the knowledge search. May increase cost and latency.
Parameter | Type | Description
How to handle the result | Select | Determine how to handle the prompt result:
  • Store in Input — stores the AI Agent result in the Input object. To print the prompt result, refer to the configured Input key in a Say Node or enable the Output result immediately option.
  • Store in Context — stores the result in the Context object. To print the prompt result, refer to the configured Context key in a Say Node or enable the Output result immediately option.
  • Stream to Output — streams the result directly into the output. Chunks from the prompt response are output into the conversation chat as soon as a Stream Buffer Flush Token is matched. You don't need to use the AI Agent Output token and a Say Node. By default, the result is not stored in the Input or Context. To store it, enable Store Copy in Input.
Input Key to store Result | CognigyScript | Appears when Store in Input or Stream to Output is selected. The result is stored in input.aiAgentOutput by default. You can specify another value, but the AI Agent Output token will not work if the value is changed.
Context Key to store Result | CognigyScript | Appears when Store in Context is selected. The result is stored in context.aiAgentOutput by default. You can specify another key.
Stream Buffer Flush Tokens | Text Array | Appears when Stream to Output is selected. Defines tokens that trigger the stream buffer to flush to the output. Tokens can be punctuation marks or symbols, such as \n. See the example after this table.
Output result immediately | Toggle | Appears when Store in Input or Store in Context is selected. Allows immediate output of results without using the Say Node and AI Agent Output token.
Store Copy in Input | Toggle | Appears when Stream to Output is selected. In addition to streaming the result, stores a copy in the Input object under the key specified in the Input Key to store Result field.
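For example, a Stream Buffer Flush Tokens configuration could list the characters at which buffered text is flushed to the output. The token set below is only an illustration; choose tokens that match how your model formats its responses.

[
  "\n",
  ".",
  "!",
  "?"
]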
Parameter | Type | Description
Voice Setting | Select | Configure the voice settings for the AI Agent Job. This parameter determines how the AI Agent selects the voice for text-to-speech (TTS) output. Select one of the following options:
  • Inherit from AI Agent — use the voice settings defined in the AI Agent creation settings.
  • Use Job Voice — apply custom voice settings specific to this job, allowing the AI Agent to adapt to the particular role it performs. For example, for a marketing AI Agent, the voice can be engaging, friendly, and persuasive. For customer support, it might be neutral, empathetic, and formal.
TTS Vendor | Dropdown | Select a TTS vendor from the list or add a custom one. Note: The AI Agent Node doesn't support TTS Labels to distinguish configurations from the same vendor. To use TTS Labels, add a Set Session Config Node before the AI Agent Node in the Flow editor.
Custom (Vendor) | CognigyScript | Appears when Custom is selected in TTS Vendor. Specify the custom TTS vendor. For preinstalled providers, use lowercase, for example, microsoft, google, aws. For custom providers, use the name defined on the Speech Service page in the Voice Gateway Self-Service Portal.
TTS Language | Dropdown | Define the language of the AI Agent output. Ensure it aligns with the preferred language of the end user.
Custom (Language) | CognigyScript | Appears when Custom is selected in TTS Language. Specify the output language. The format depends on the TTS vendor; check the vendor documentation. Typical formats are de-DE, fr-FR, and en-US.
TTS Voice | Dropdown | Define the voice for AI Agent output. Customize tone, gender, style, and regional specifics to align conversations with your brand and audience.
Custom (Voice) | CognigyScript | Appears when Custom is selected in TTS Voice. Specify a custom voice, often required for region-specific voices. The format depends on the TTS vendor and typically follows language-region-VoiceName, for example, de-DE-ConradNeural or en-US-JennyNeural.
TTS Label | CognigyScript | An alternative name for the TTS vendor, as specified in the Voice Gateway Self-Service Portal. Use this when multiple speech services from the same vendor exist.
Disable TTS Audio Caching | Toggle | Disables TTS audio caching. By default, the setting is deactivated. In this case, previously requested TTS audio results are stored in the AI Agent cache. When a new TTS request is made and the audio text has been previously requested, the AI Agent retrieves the cached result instead of sending another request to the TTS provider. When the setting is activated, the AI Agent caches TTS results but doesn't use them. In this case, each request is sent directly to your speech provider. Note that disabling caching can increase TTS costs. For detailed information, contact your speech provider.
Parameter | Type | Description
Tool Choice | Selector | If supported by your LLM model, determines how tools should be selected by the AI Agent:
  • Auto — tools (or none) are automatically selected by the AI Agent when needed.
  • Required — the AI Agent will always use one of its Tools.
  • None — the AI Agent won't use a tool.
Use Strict Mode | Toggle | When enabled, strict mode (if supported by the LLM provider) ensures that arguments passed to a tool call precisely match the expected parameters. This helps prevent errors but may slightly delay responses, especially during the first call after making changes.
Parameter | Type | Description
Process Images | Toggle | Enables the AI Agent to read and understand image attachments. Ensure that your LLM provider supports image processing (refer to your provider's documentation). Also verify that attachments are supported and activated in your Endpoint, such as Webchat.
Images in Transcript | Selector | Configures how images older than the last turn are handled to reduce token usage:
  • Minify — reduces the size of these images to 512×512 px.
  • Drop — excludes the images.
  • Keep — sends the maximum size (consumes more tokens).
Limitations and token consumption depend on the LLM used.
Parameter | Type | Description
LLM | Selector | Select a model that supports the AI Agent Node feature. The Default model is the one specified in Settings > Generative AI Settings of your Project. Choose the model you added earlier while configuring the Agentic AI feature. This model will manage your AI Agent.
AI Agent Base Version | Selector | Select the base version of the AI Agent to use:
  • Fixed Version — choose a specific version, for example, 1.0, to ensure stability and avoid potential breaking changes. Use a fixed version in production environments or critical workflows. The version dropdown will be updated as future AI Agent Node versions are released.
  • Latest — use the most recent version of the AI Agent Node. This gives access to the latest features but may require manual updates if breaking changes occur.
When upgrading to a fixed version or switching to the latest, always test your AI Agent carefully to ensure compatibility.
Timeout | Number | Define the maximum number of milliseconds to wait for a response from the LLM provider.
Maximum Completion Tokens | Slider | Set the maximum number of tokens that can be used during a process to manage costs. If the limit is too low, the output may be incomplete. For example, a limit of 100 tokens corresponds to roughly 75 English words, depending on language and tokenization.
Temperature | Slider | Define the sampling temperature, ranging from 0 to 1. Higher values, for example, 0.8, make output more random; lower values, for example, 0.2, make it more focused and deterministic.
Include Rich Media Context | Toggle | Controls whether context is added to the prompt. In this case, context refers to text extracted from rich media such as Text with Buttons, Quick Replies, and other types. This text provides AI Agents with additional information, improving their responses.

If the Textual Description parameter in the Say, Question, or Optional Question Node is filled, the context is taken only from this parameter. If the Textual Description parameter is empty, the context is taken from the button titles and alt text in the rich media. By default, the Include Rich Media Context parameter is active. When this parameter is inactive, no context is added.

Examples:
  • If Textual Description is filled:

    Textual Description: Select your preferred delivery option: Standard Delivery or Express Delivery.

    Quick Replies’ buttons: Standard Delivery, Express Delivery.

    Context added to the prompt: Select your preferred delivery option: Standard Delivery or Express Delivery.

  • If Textual Description is empty:

    Textual Description: empty.

    Quick Replies’ buttons: Standard Delivery, Express Delivery.

    Context added to the prompt: Standard Delivery, Express Delivery.

  • If Include Rich Media Context is inactive:

    No context is added to the prompt.

Parameter | Type | Description
Log to System Logs | Toggle | Log errors to the system logs. They can be viewed on the Logs page of your Project. This parameter is inactive by default.
Store in Input | Toggle | Store errors in the Input object.
Select Error Handling Approach | Select | Choose one of the Error Handling options:
  • Stop Flow Execution — terminate the current Flow execution.
  • Continue Flow Execution — allow the Flow to continue executing, bypassing the error and proceeding to the next steps.
  • Go to Node — redirect the workflow to a specific Node in the Flow, useful for error recovery or custom error handling.
Select Flow | Select | Appears when Go to Node is selected. Choose a Flow from the available options.
Select Node | Select | Appears when Go to Node is selected. Choose a Node from the available options.
Error Message (optional) | CognigyScript | Add an optional message to the output if the AI Agent Node fails.
Parameter | Type | Description
Log Job Execution | Toggle | Send a debug message with the current AI Agent Job configuration. The message appears in the Interaction Panel when debug mode is enabled. The parameter is active by default.
Log Knowledge Results | Toggle | Send a debug message containing the result from a knowledge search. The message appears in the Interaction Panel when debug mode is enabled. The parameter is inactive by default.
Show Token Count | Toggle | Send a debug message containing the input, output, and total token count. The message appears in the Interaction Panel when debug mode is enabled. Cognigy.AI uses the GPT-3 tokenizer algorithm, so actual token usage may vary depending on the model used. The parameter is inactive by default.
Log System Prompt | Toggle | Send a debug message containing the system prompt. The message appears in the Interaction Panel when debug mode is enabled. The parameter is inactive by default.
Log Tool Definitions | Toggle | Send a debug message containing information about the configured AI Agent tools. The message appears in the Interaction Panel when debug mode is enabled. The parameter is inactive by default.
Log LLM Latency | Toggle | Send a debug message containing key latency metrics for the request to the model, including the time taken for the first output and the total time to complete the request. The message appears in the Interaction Panel when debug mode is enabled. The parameter is inactive by default.
Send request logs to Webhook | Toggle | Send the request sent to the LLM provider and the subsequent completion to a webhook service, including metadata, the request body, and custom logging data. With this parameter, you can use a webhook service to view detailed logs of the request to the LLM. The parameter is inactive by default.
Webhook URL | CognigyScript | Enter the URL of the webhook service to send the request logs to.
Custom Logging Data | CognigyScript | Enter custom data to send with the request to the webhook service.
Condition for Webhook Logging | CognigyScript | Enter the condition under which request logs are sent to the webhook service.
Webhook Headers | Input fields | Enter the headers to send with the request to the webhook service. Use the Key and Value fields to enter a header. The Value field supports CognigyScript. After entering a header key, new empty Key and Value fields are automatically added in case you need to add more headers. Alternatively, you can click Show JSON Editor and enter the headers in the code field.
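A minimal sketch of webhook headers entered through the JSON editor. The header names and values are only illustrative, and the CognigyScript expression assumes the Input object exposes a sessionId property; use whatever your webhook service expects.

{
  "Authorization": "Bearer <your-webhook-token>",
  "X-Session-Id": "{{input.sessionId}}"
}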

Child Nodes

Tool

Tools are child Nodes of AI Agent Nodes. They define the actions the AI Agent can take. If the AI Agent decides to execute a tool, the branch below the corresponding child Node is executed. At the end of a tool branch, it is advisable to use a Resolve Tool Action Node to return to the AI Agent. Clicking the Tool Node lets you define the tool, set its parameters, and enable detailed debug messages about the tool's execution.
Parameter | Type | Description
Tool ID | CognigyScript | Provide a meaningful name as a Tool ID. This ID can contain only letters, numbers, underscores (_), or dashes (-). For example, update_user-1.
Description | CognigyScript | Provide a detailed description of what the tool does, when it should be used, and its parameters.
Configure the parameters that the AI Agent will collect before the tool is called. You can switch between the Graphical and JSON editors. When editing the JSON, follow the JSON Schema specification.
Parameter | Type | Description
Use Parameters | Toggle | Activate this toggle to add parameters in addition to the tool name and description. The AI Agent will collect all the data it needs and call the tool with these parameters filled in as arguments. These values can be accessed directly in the input.aiAgent.toolArgs object.
Name | Text | Specify the name of the parameter. The name should be clear and concise, and describe the purpose of the parameter.
Type | Selector | Select the type of the parameter:
  • String — a sequence of characters. For example, "hello", "123".
  • Number — a numerical value, which can be either an integer (for example, 5) or a floating-point number (for example, 3.14).
  • Boolean — a logical value representing true or false.
  • Array — a collection of elements, which can contain multiple values of any type. For example, ["apple", "banana", "cherry"].
  • Object — a collection of key-value pairs, where each key is a string and the value can be of any type. For example, {"name": "John", "age": 30}.
Description | CognigyScript | Explain what the parameter means by providing a brief description of the parameter's usage.
Enum (optional) | Enum | Define a set of values that the parameter can accept. The enum restricts the input to one of the specified values, ensuring only valid options are chosen. The enum is only available for string-type parameters in the Graphical editor. For other types, use the JSON editor (see the example after this table). Enums may not be supported by all LLM providers.
Add Parameter | Button | Add a new parameter.
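For example, a non-string enum can only be defined in the JSON editor. The sketch below follows the JSON Schema specification and restricts a number parameter to a fixed set of values; the parameter name and values are only illustrative.

{
  "type": "object",
  "properties": {
    "priority": {
      "type": "number",
      "description": "Priority level of the request.",
      "enum": [1, 2, 3]
    }
  },
  "required": ["priority"],
  "additionalProperties": false
}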
Parameter | Type | Description
Debug Message when called | Toggle | Enable the output of a debug message when the tool is called to provide detailed information about the tool call.
Parameter | Type | Description
Condition | CognigyScript | The tool will be enabled only if the condition is evaluated as true. If false, the tool isn't part of the AI Agent's Tools within this execution. For example, when using the unlock_account tool, you can specify a condition like context.accountStatus === "locked". This checks the value in the context, and if it is missing or different, the tool will not be enabled.

MCP Tool

MCP Tool Nodes are child Nodes of AI Agent Nodes. They connect to a remote MCP server to load tools that the AI Agent can execute. If the AI Agent decides to execute one of the loaded tools, the branch below the MCP Tool Node is triggered. Clicking the MCP Tool Node lets you define the connection, filter the loaded tools, and enable detailed debug messages about the tool's execution.
Parameter | Type | Description
Name | CognigyScript | Provide a name for the MCP connection. This name helps you identify the source of the loaded tools.
MCP Server SSE URL | CognigyScript | Provide the URL of an SSE (Server-Sent Events) endpoint from a remote MCP server. Ensure that you connect only to trusted MCP servers.
Timeout | Slider | Set the timeout for the MCP connection in seconds.
Parameter | Type | Description
Debug Loaded Tools | Toggle | Enable this parameter to display a debug message listing all tools loaded from the MCP server. The debug message also includes tools filtered out in the Advanced section. This parameter shows whether tools were loaded from the cache or directly from the MCP server.
Debug with Parameters | Toggle | Enable this parameter to include the tool parameters in the debug message.
Debug calling Tool | Toggle | Enable the output of a debug message when the tool is called to provide detailed information about the tool call.
Parameter | Type | Description
Cache Tools | Toggle | Caches the tools loaded from the MCP server. You can disable caching while developing, but ensure that caching is enabled in production for performance reasons. The caching time is 10 minutes. In debug mode, you can verify whether the cache was used; to do this, make sure Debug Loaded Tools is enabled. For more information, see the Debug Settings section.
Condition | CognigyScript | Sets the condition under which the tool will be activated. If the condition is evaluated as false, the tool isn't part of the AI Agent's Tools during execution. For example, when using the unlock_account tool, you can specify a condition like context.accountStatus === "locked". This checks the value in the context, and if it is missing or different, the tool will not be enabled.
Tool Filter | Select | Controls whether tools should be excluded from execution. You can select one of the following options:
  • None — no tool filtering is applied, and all tools are available for execution. This option is selected by default.
  • Whitelist — only tools on the list are allowed for execution, while all other tools are excluded.
  • Blacklist — tools on the list are excluded from execution, while all other tools remain available.
Blacklist | CognigyScript | This parameter appears if you select Blacklist in Tool Filter. Specify the tools that should be blocked from execution. Specify only one tool per field.
Whitelist | CognigyScript | This parameter appears if you select Whitelist in Tool Filter. Specify the tools you want to allow for execution. Specify only one tool per field.
Custom Headers | - | Sets custom authentication headers to send with the request to the MCP server. Use the Key and Value fields to enter a header. The Value field supports CognigyScript. After entering a header key, new empty Key and Value fields are automatically added in case you need to add more headers. Alternatively, you can click Show JSON Editor and enter the headers in the code field. See the example after this table.
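A minimal sketch of custom authentication headers entered through the JSON editor. The header name and token below are placeholders; use the authentication scheme your MCP server requires.

{
  "Authorization": "Bearer <your-mcp-server-token>"
}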

Call MCP Tool

In the Flow editor, when you add an MCP Tool Node, a Call MCP Tool Node is automatically created below it. These two Nodes work together to define and execute the chosen tool. The Call MCP Tool Node sets the actual execution point of the chosen tool. This way, you can verify or modify the tool call arguments in the input.aiAgent.toolArgs object, or add a Say Node before the tool call. When the Call MCP Tool Node is executed, the tool call is sent to the remote MCP server, where the Tool is executed remotely with any arguments set by the AI Agent. To return the tool result to the AI Agent, the Resolve Immediately setting can be enabled to send the full result returned from the remote MCP server to the AI Agent. As an alternative, use a Resolve Tool Action Node to return a specific result to the AI Agent.
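For example, for the Zapier Google Docs tool shown in the Examples section, the arguments collected by the AI Agent might look like the sketch below in input.aiAgent.toolArgs before the Call MCP Tool Node sends them to the remote server. The values are only illustrative.

{
  "title": "Meeting Notes",
  "file": "Summary of the support conversation."
}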
Parameter | Type | Description
Resolve Immediately | Toggle | Enable this parameter to immediately resolve the tool action with the full result as the tool answer.
Parameter | Type | Description
How to handle the result | Select | Determine how to handle the MCP tool call result:
  • Store in Input — stores the result in the Input object.
  • Store in Context — stores the result in the Context object.
Input Key to store Result | CognigyScript | Appears when Store in Input is selected. The result is stored in the input.aiAgent.toolResult object by default. You can specify another value, but the MCP Tool Result Token won't work if the value is changed.
Context Key to store Result | CognigyScript | Appears when Store in Context is selected. The result is stored in the context.aiAgent.toolResult object by default.
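With the default Input key, the MCP tool call result can be read from the Input object as sketched below. The payload shown is only a placeholder; its actual shape depends on the tool that the remote MCP server executes.

{
  "aiAgent": {
    "toolResult": "<result returned by the remote MCP server>"
  }
}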
Parameter | Type | Description
Debug Tool Result | Toggle | Enable the output of a debug message with the tool call result after a successful call.

Knowledge Tool

The Knowledge Tool Node is a child Node of the AI Agent Node. It allows the AI Agent to directly access Knowledge Stores to provide context-aware responses. In the Knowledge Tool Node, you can select the Knowledge Store to search, add Source Tags to refine the search, and enable detailed debug messages about the tool's execution.
Parameter | Type | Description
Knowledge Store | Selector | Select a Knowledge Store.
Tool ID | CognigyScript | Enter a name as a tool ID. This ID can contain only letters, numbers, underscores (_), or dashes (-). If you use more than one Knowledge tool, the tool ID provides context to trigger the correct Knowledge tool. In this case, enter a clear tool ID for each Knowledge tool, for example, search_appliances.
Description | CognigyScript | Enter instructions to guide the AI Agent to call the Knowledge tool. The description field provides context to trigger the Knowledge tool. If you want to use more than one Knowledge tool, enter clear instructions for the cases in which each Knowledge tool should be used. For example, Find the answer to prompts or questions about appliances by searching the attached data sources. Use this tool when a customer asks about appliance items such as washing machines, dryers, and other household appliances. Focus exclusively on a knowledge search and do not execute tasks like small talk, calculations, or script running.
Parameter | Type | Description
Debug Message when called | Toggle | Enable the output of a debug message when the tool is called to provide detailed information about the tool call.
Parameter | Type | Description
Top K | Slider | Set how many Knowledge Chunks to return. Providing more results gives the AI Agent additional context, but it also increases noise and token usage.
Store Location | Selector | Select whether and where to store the knowledge search results. Select one of the following options:
  • Don't store — the content isn't stored. This option is set by default.
  • Input — the content is stored in the Input object.
  • Context — the content is stored in the Context object.
Input Key to store result | CognigyScript | Appears when Store Location is set to Input. The property in the Input object where the result is stored. For example, input.knowledgeSearch.
Context Key to store result | CognigyScript | Appears when Store Location is set to Context. The property in the Context object where the result is stored. For example, context.knowledgeSearch.
Source Tags | CognigyScript | Enter Knowledge Source Tags to refine the scope of your knowledge search, including only the most relevant Knowledge Chunks in the Knowledge Store. Before entering tags, ensure they are included in the Knowledge Sources. Add tags by entering each one separately and pressing Enter. You can add a maximum of 5 tags. When multiple Source Tags are specified, the knowledge search defaults to the AND operator, meaning it only considers Sources that have all specified tags. To change this behavior, adjust the Match Type for Source Tags parameter.
Match type for Source Tags | Selector | The operator for filtering Knowledge Sources by Source Tags. Select one of the following options:
  • AND — the default value; requires all Source Tags to match across multiple Knowledge Sources. Consider the following example: there are Knowledge Sources with Source Tags S-a, S-b, and S-c. When you use the AND operator to filter by S-a and S-b, only Sources with both Tags S-a and S-b are included in the search results.
  • OR — requires at least one Source Tag to match across multiple Knowledge Sources. Consider the following example: there are Knowledge Sources with Source Tags S-a, S-b, and S-c. When you use the OR operator to filter by S-a or S-b, any Source with either Tag S-a or S-b is included in the search results.
Condition | CognigyScript | The Knowledge tool is activated only if the condition is evaluated as true. If false, the tool isn't included in the current execution. For example, you can enter a condition such as context.productCategory === "appliances". This checks the value in the context, and if it is missing or different, the tool isn't activated.

Send Email Tool

The Send Email tool lets your AI Agent send emails directly to users. The Send Email tool uses the same configuration and restrictions as the Email Notification Node but adds more flexibility. While the Email Notification Node sends emails at a fixed step in a Flow, the Send Email tool allows the AI Agent to send emails dynamically, based on user input, conversation context, or instructions. This tool makes automation more flexible and minimizes extra Flow steps.
Parameter | Type | Description
Tool ID | CognigyScript | Provide a meaningful name as a tool ID. This ID can contain only letters, numbers, underscores (_), or dashes (-). The default name is send_email.
Description | CognigyScript | Provide a detailed description of what the tool should do. The default description is Create and send a new email message.
Recipient TO Email Addresses | CognigyScript | A comma-separated list of email addresses to which the email will be sent.
Parameter | Type | Description
Debug Message when called | Toggle | Enable the output of a debug message when the tool is called to provide detailed information about the tool call.
Parameter | Type | Description
Condition | CognigyScript | The tool will be enabled only if the condition is evaluated as true. If false, the tool isn't part of the AI Agent's Tools within this execution. For example, when using the unlock_account tool, you can specify a condition like context.accountStatus === "locked". This checks the value in the context, and if it is missing or different, the tool will not be enabled.

Handover to AI Agent Tool

The Handover to AI Agent tool lets you transfer a conversation to another AI Agent in the same or a different Flow. This approach ensures the conversation keeps its context, different AI Agents can handle specific tasks, and multi-step conversations run smoothly across Flows.
Parameter | Type | Description
Tool ID | CognigyScript | Provide a meaningful name as a tool ID. This ID can contain only letters, numbers, underscores (_), or dashes (-). The default name is handover_to_ai_agent.
Description | CognigyScript | Describe what the receiving AI Agent should do. For example: Handle product recommendations, Perform technical troubleshooting, or Assist with billing questions. Be specific so the handover rules clearly indicate when this AI Agent should take over.
Select Flow | Selector | Select the target Flow to hand over to. By default, all Flows in the Project are displayed in the list. The selected Flow must contain an AI Agent Node.
Select Node | Selector | Select the AI Agent Node to hand over to.
Parameter | Type | Description
Debug Message when called | Toggle | Enable the output of a debug message when the tool is called to provide detailed information about the tool call.
Parameter | Type | Description
Condition | CognigyScript | The tool will be enabled only if the condition is evaluated as true. If false, the tool isn't part of the AI Agent's Tools within this execution. For example, when using the unlock_account tool, you can specify a condition like context.accountStatus === "locked". This checks the value in the context, and if it is missing or different, the tool will not be enabled.

Examples

Tool

In this example, the unlock_account tool unlocks a user account based on the user's login email address. Parameter configuration in JSON:
{
  "type": "object",
  "properties": {
    "email": {
      "type": "string",
      "description": "User's login email for their account."
    }
  },
  "required": ["email"],
  "additionalProperties": false
}
where:
  • type — the type for a tool parameter schema, which must always be object.
  • properties — defines the parameters for the tool configuration:
    • email — a required tool parameter for unlocking the account.
      • type — defines the data type for the tool parameter.
      • description — a brief explanation of what the property represents.
  • required — lists email as a required parameter, ensuring that this value is always provided when the tool is called.
  • additionalProperties — ensures that the input contains only the email tool parameter, and no others are allowed.
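If the tool should also capture the reason for unlocking, the schema could be extended with an additional string parameter restricted by an enum. This extension is only an illustration; the parameter name and allowed values are assumptions, not part of the example above.

{
  "type": "object",
  "properties": {
    "email": {
      "type": "string",
      "description": "User's login email for their account."
    },
    "reason": {
      "type": "string",
      "description": "Reason for unlocking the account.",
      "enum": ["forgot_password", "suspicious_activity_resolved", "other"]
    }
  },
  "required": ["email", "reason"],
  "additionalProperties": false
}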

MCP Tool and Call MCP Tool

Use Zapier’s Remote MCP server

You can create a custom MCP server with personalized tools by using one of the provided SDKs. For a quicker setup, you can use a third-party provider. For example, Zapier allows you to configure your MCP server, which can be connected to multiple application APIs. To use Zapier as a remote MCP server, follow these steps:
  1. Log in to your Zapier account, go to the MCP settings page, and configure your MCP server.
  2. Copy the SSE URL and paste it into the MCP Server SSE URL field of your MCP Tool Node.
  3. In the Zapier MCP settings, create an action to connect to various APIs. For example, you can create a Zapier action to automatically generate a Google Doc.
Once the setup is complete, the configured MCP Actions will be loaded when the AI Agent is executed. You will see the following debug message in the Interaction Panel, indicating the result of the tool call after a successful execution:
AI Agent: MCP Tool
Fetched tools from MCP Tool "zapier"

- google_docs_create_document_from_: Create a new document from text. Also supports limited HTML.
  Parameters:
  - title (string): Document Name
  - file (string): Document Content

Knowledge Tool

In this example, you have two Knowledge tools to search two different Knowledge Stores. Each Knowledge Store refers to a different product category, in this example, appliances and furniture. To guide the AI Agent to use the correct Knowledge tool, enter a clear tool ID and description:
  • Knowledge Tool 1:
    • Tool ID: search_appliances
    • Description: Find the answer to prompts or questions about appliances by searching the attached data sources. Use this tool when a customer asks about appliance items such as washing machines, dryers, and other household appliances. Focus exclusively on a knowledge search and do not execute tasks like small talk, calculations, or script running.
  • Knowledge Tool 2:
    • Tool ID: search_furniture
    • Description: Find the answer to prompts or questions about furniture by searching the attached data sources. Use this tool when a customer asks about furniture items such as sofas, tables, and other items. Focus exclusively on a knowledge search and do not execute tasks like small talk, calculations, or script running.

Get AI Agent Jobs and Tools via API

You can retrieve all job configurations and associated tools for a specific AI Agent via the Cognigy.AI API GET /v2.0/aiagents/{aiAgentId}/jobs request. The response includes each job’s configuration details and a list of available tools.

More Information