
Description

The Add Transcript Step Node allows you to add messages to the conversation transcript. These messages become part of the LLM context and directly influence AI Agent behavior, tool handling, conversation flow, and contextual awareness. Use this Node to inject additional information into the transcript without requiring a new user input. The Node is especially helpful in complex AI Agent architectures.

Restrictions

  • This Node isn’t compatible with the @cognigyRecentConversation and @cognigyRecentUserInputs tags in the LLM Prompt Node.
  • Overuse of empty or redundant transcript entries can reduce clarity and make debugging more difficult.

Parameters

The Node provides the following parameters:
Role (Selector): Defines the conversation actor for the added transcript entry. The selected role determines how the LLM interprets the message and how strongly it may influence behavior. Select one of the following roles:
  • User — represents direct input from the end user, such as questions, requests, or instructions. User messages are primary drivers of the model’s responses.
  • Assistant — represents previous AI-generated responses in the conversation. Assistant messages provide context and continuity from the model’s perspective.
  • Agent — represents internal reasoning or planning steps when the LLM operates in agentic mode. Agent messages can influence structured thinking and tool selection within multi-agent workflows.
  • System — defines overarching rules, constraints, persona, and behavioral guidelines. System messages have strong influence over the model’s behavior and establish global instructions.
  • Tool — represents outputs returned from external systems or tools invoked by the AI Agent. Tool messages provide factual results, for example, API responses, that the model uses to continue reasoning or generate an answer.
Only transcript entries included in the active conversation context are visible to the LLM during the request.
Type (Selector): Defines the type of transcript entry. The selected type determines how the message is processed and interpreted within the conversation transcript. Available types depend on the selected option in the Role parameter:
  • Input — used when User is selected. Represents messages provided by the end user.
  • Output — used when Agent is selected; also available when Assistant is selected. Represents natural-language responses generated by the assistant or agent and shown to the end user.
  • Tool Call — available when Assistant is selected. Represents structured requests to invoke a function or external tool. Not shown directly to the end user.
  • Debug Log — used when System is selected. Internal system messages for debugging and diagnostics.
  • Tool Answer — used when Tool is selected. Represents responses returned from external tools or functions.
Name (CognigyScript): Appears if Tool Call or Tool Answer is selected as Type. Specifies the name of the tool being called, for example, getWeather. The tool name can be used in the LLM context to clarify which tool is being invoked and to help guide the model’s reasoning and response generation.
ID (CognigyScript): Appears if Tool Call is selected as Type. Specifies a unique identifier for the tool call, for example, getWeather_12345. The ID can be used to track specific tool calls and their corresponding responses, especially in complex conversations with multiple tool interactions, and helps ensure that the correct tool answer is associated with the right tool call in the LLM context.
Tool Call ID (CognigyScript): Appears if Tool Answer is selected as Type. Specifies the unique identifier of the corresponding tool call, for example, getWeather_12345. This parameter links the tool answer back to the specific tool call it responds to, ensuring that the LLM associates the response with the original request and maintains coherent reasoning in conversations with multiple tool interactions.
Content (CognigyScript): Appears if Tool Answer is selected as Type. Specifies the content of the tool answer, for example, Location is New York. Date is 2024-06-01. The content provides the LLM with the information returned from the external tool, which it can use to generate informed responses or continue reasoning based on the results of the tool call.
Input (JSON): Appears if Tool Call is selected as Type. Specifies the input parameters for the tool call in structured JSON format, for example, {"index":0,"id":"call_zxXusUexoviWzfwqjNHLfjP4","type":"function","function":{"name":"unlock_account","arguments":{"email":"elena@cognigy.com"}}}. The input data helps the LLM understand the context of the tool call and generate more accurate responses, and ensures that the tool receives the information it needs to execute correctly.
Header (CognigyScript): Appears if Debug Log is selected as Type. Specifies a header for the debug log entry, for example, API Response Debug. The header helps categorize and identify debug messages in the transcript, making issues easier to analyze and troubleshoot during development.
Message (CognigyScript): Appears if Debug Log is selected as Type. Specifies the content of the debug log message, for example, Received response status: Success. The message can contain detailed information about the system’s internal state, API responses, or other data relevant to diagnosing and resolving issues during development.
Meta Data (JSON): Appears if Debug Log is selected as Type. Specifies additional metadata for the debug log entry in structured JSON format, for example, { "timestamp": "2024-06-01T12:00:00Z", "severity": "info" }. The metadata provides context about the debug message, such as when it was logged and its severity level, which helps developers prioritize and analyze logs during troubleshooting.
Text (CognigyScript): Appears if Input or Output is selected as Type. Enter the text depending on the type of the message:
  • Input – represents messages from the end user. Intended for questions, requests, instructions, or any data the user provides. These entries serve as the primary input for the LLM and directly influence the AI Agent’s responses. The text here becomes part of the LLM context for understanding user intent.
  • Output – represents messages generated by the AI Agent or human agent.
    • If Assistant is selected as the role: enter natural language responses shown to the user. These responses maintain conversation continuity and provide instructions, explanations, or information.
    • If Agent is selected as the role: enter internal notes, reasoning, or guidance that influence the workflow and structured decision-making.
Data (JSON): Appears if Input or Output is selected as Type. Optional structured metadata that complements the Text field. For example, { "type": "motivational" } can categorize the input or provide context that influences how the system processes the text.
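As a sketch of the role and type pairings described above, the following TypeScript model is illustrative only, inferred from this parameter reference and the JSON examples later on this page; the names TranscriptEntry and isValidCombination are hypothetical, not part of the Cognigy API:

```typescript
// Hypothetical model of transcript entries; field names follow the
// JSON examples on this page, not Cognigy's internal types.
type TranscriptEntry =
  | { role: "user"; type: "input"; payload: { text: string; data?: object } }
  | { role: "assistant" | "agent"; type: "output"; payload: { text: string; data?: object } }
  | { role: "assistant"; type: "toolCall"; payload: { name: string; id: string; input: object } }
  | { role: "system"; type: "debugLog"; payload: { header: string; message: string; metadata?: object } }
  | { role: "tool"; type: "toolAnswer"; payload: { toolCallId: string; name: string; content: string } };

// Checks whether a role/type combination is one the Node accepts,
// per the Type parameter description above.
function isValidCombination(role: string, type: string): boolean {
  const allowed: Record<string, string[]> = {
    user: ["input"],
    assistant: ["output", "toolCall"],
    agent: ["output"],
    system: ["debugLog"],
    tool: ["toolAnswer"],
  };
  return (allowed[role] ?? []).includes(type);
}
```

For example, `isValidCombination("assistant", "toolCall")` holds, while `isValidCombination("user", "output")` does not.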

Use Cases

The following use cases illustrate how the Add Transcript Step Node is used:
  • Add External System Results. Inject API responses such as CRM data, ID&V results, or order information into the transcript so the AI Agent can use this data for reasoning.
  • Manage AI Agent Transitions. Add transcript entries to mark milestones, such as completed handovers or transitions between AI Agents.
  • Context Refocusing. Insert targeted messages to highlight new priorities or provide technical context in long-running conversations.
  • Workflow-Based Context Injection. Add follow-up instructions to guide subsequent behavior after deterministic processes—for example, GDPR handling.
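As an illustration of the first use case, injecting an external API result could be sketched as below. The entry shape mirrors the Tool Answer example in the Examples section; makeToolAnswer is a hypothetical helper, since in practice the Node itself constructs the entry:

```typescript
// Build a Tool Answer transcript entry from a raw API response,
// in the shape shown in the Examples section (hypothetical helper).
function makeToolAnswer(toolCallId: string, name: string, apiResult: object): object {
  return {
    role: "tool",
    type: "toolAnswer",
    source: "system",
    payload: {
      toolCallId,
      name,
      // Serialize the API result so the LLM receives it as plain text.
      content: JSON.stringify(apiResult),
    },
  };
}
```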

Examples

The examples show how different message types from different roles appear in the conversation transcript, how they are processed, and how they influence the conversation. Each role is shown with its corresponding JSON representation in the Input object. The "source": "system" key-value pair indicates that the role was added to the transcript by the system using the Add Transcript Step Node.
User (Input)
End-user messages are processed to detect intent and trigger Tool Calls or Outputs. Metadata can be used to pass structured parameters through the Flow.
Parameters:
  • type: Input
  • text: What is the weather like in New York today?
  • data: { "location": "New York", "date": "2024-06-01" }
Example JSON:
{
  "role": "user",
  "type": "input",
  "source": "system",
  "payload": {
    "text": "What is the weather like in New York today?",
    "data": {
      "location": "New York",
      "date": "2024-06-01"
    }
  },
  "id": "62de038b-b1f0-4ca1-b522-93879046342e",
  "traceId": "endpoint-realtimeClient-85035eb7-7271-48ef-aa2d-e01d2a9600bf",
  "timestamp": 1773131963644
}
Assistant (Output)
Outputs can be displayed directly to the user or guide conditional logic in the Flow based on metadata types.
Parameters:
  • type: Output
  • text: Your current account balance is $1,245.67
  • data: { "type": "responsive" }
Example JSON:
{
  "role": "assistant",
  "type": "output",
  "source": "system",
  "payload": {
    "text": "Your current account balance is $1,245.67",
    "data": {
      "type": "responsive"
    }
  },
  "id": "c8f1f8dc-7b02-493c-890d-0a292806e213",
  "traceId": "endpoint-realtimeClient-85035eb7-7271-48ef-aa2d-e01d2a9600bf",
  "timestamp": 1773131963645
}
Assistant (Tool Call)
Calls an external tool to fetch data or perform actions. The id property links this call to the Tool Answer for later processing.
Parameters:
  • type: Tool Call
  • name: getWeather
  • id: getWeather-123
  • input: { "location": "New York", "date": "2024-06-01" }
Example JSON:
{
  "role": "assistant",
  "type": "toolCall",
  "source": "system",
  "payload": {
    "name": "getWeather",
    "id": "getWeather-123",
    "input": {
      "location": "New York",
      "date": "2024-06-01"
    }
  },
  "id": "21a2c839-207b-4c25-ad9b-804c9c3dc109",
  "traceId": "endpoint-realtimeClient-85035eb7-7271-48ef-aa2d-e01d2a9600bf",
  "timestamp": 1773131963645
}
Agent (Output)
Represents internal reasoning or planning. Influences workflow decisions but is generally not shown to the end user.
Parameters:
  • type: Output
  • text: Verified customer identity and approved the transaction
  • data: { "type": "responsive" }
Example JSON:
{
  "role": "agent",
  "type": "output",
  "source": "system",
  "payload": {
    "text": "Verified customer identity and approved the transaction",
    "data": {
      "type": "responsive"
    }
  },
  "id": "82bc4c22-be0a-454f-a8d4-1c1f53006971",
  "traceId": "endpoint-realtimeClient-85035eb7-7271-48ef-aa2d-e01d2a9600bf",
  "timestamp": 1773131963646
}
System (Debug Log)
Debug log entries are used for monitoring, logging, and troubleshooting AI Agent execution. They help developers diagnose errors without exposing them to users.
Parameters:
  • type: Debug Log
  • header: Validation Error
  • message: User ID missing in request
  • metadata: { "severity": "info" }
Example JSON:
{
  "role": "system",
  "type": "debugLog",
  "source": "system",
  "payload": {
    "header": "Validation Error",
    "message": "User ID missing in request",
    "metadata": {
      "severity": "info"
    }
  },
  "id": "663bd605-367f-44f9-858c-93911e05878c",
  "traceId": "endpoint-realtimeClient-85035eb7-7271-48ef-aa2d-e01d2a9600bf",
  "timestamp": 1773131963647
}
Tool (Tool Answer)
Contains the result of a tool call. The AI Agent can use this output to craft responses or make further decisions in the workflow.
Parameters:
  • type: Tool Answer
  • toolCallId: getWeather-123
  • name: getWeather
  • content: The weather in New York is +10°C
Example JSON:
{
  "role": "tool",
  "type": "toolAnswer",
  "source": "system",
  "payload": {
    "toolCallId": "getWeather-123",
    "name": "getWeather",
    "content": "The weather in New York is +10°C"
  },
  "id": "45a07d1a-f813-48fa-850f-b6c6febcb5bd",
  "traceId": "endpoint-realtimeClient-85035eb7-7271-48ef-aa2d-e01d2a9600bf",
  "timestamp": 1773131963647
}
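The toolCallId linkage that the Tool Call and Tool Answer examples rely on can be sketched as follows. The entry shapes mirror the JSON above; matchToolAnswers is a hypothetical helper, not a Cognigy function:

```typescript
interface ToolCallEntry {
  type: "toolCall";
  payload: { id: string; name: string; input: object };
}
interface ToolAnswerEntry {
  type: "toolAnswer";
  payload: { toolCallId: string; name: string; content: string };
}
type Entry = ToolCallEntry | ToolAnswerEntry;

// Map each tool call id to the content of the answer that references it,
// so getWeather-123's result stays attached to the right call even when
// several tools run in the same conversation.
function matchToolAnswers(transcript: Entry[]): Map<string, string> {
  const answers = new Map<string, string>();
  for (const entry of transcript) {
    if (entry.type === "toolAnswer") {
      answers.set(entry.payload.toolCallId, entry.payload.content);
    }
  }
  return answers;
}
```

With the two example entries above, the map would associate getWeather-123 with the weather string, which the AI Agent can then use when generating its reply.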

More Information