
Description
The Add Transcript Step Node allows you to add messages to the conversation transcript. These messages become part of the LLM context and directly influence AI Agent behavior, tool handling, conversation flow, and contextual awareness. Use this Node to inject additional information into the transcript without requiring a new user input. The Node is especially helpful in complex AI Agent architectures.
Restrictions
- This Node isn’t compatible with the @cognigyRecentConversation and @cognigyRecentUserInputs tags in the LLM Prompt Node.
- Overuse of empty or redundant transcript entries can reduce clarity and make debugging more difficult.
Parameters
| Parameter | Type | Description |
|---|---|---|
| Role | Selector | Defines the conversation actor for the added transcript entry. The selected role determines how the LLM interprets the message and how strongly it may influence behavior. Select one of the following roles: User, Assistant, Agent, System, or Tool. |
| Type | Selector | Defines the type of transcript entry. The selected type determines how the message is processed and interpreted within the conversation transcript. Available types depend on the selected option in the Role parameter, for example, Input, Output, Tool Call, Tool Answer, or Debug Log. |
| Name | CognigyScript | Appears if Tool Call or Tool Answer is selected as Type. This parameter allows you to specify the name of the tool being called, for example, getWeather. The tool name can be used in the LLM context to provide clarity about which tool is being invoked and to help guide the model’s reasoning and response generation. |
| ID | CognigyScript | Appears if Tool Call is selected as Type. This parameter allows you to specify a unique identifier for the tool call, for example, getWeather_12345. The ID can be used to track specific tool calls and their corresponding responses, especially in complex conversations with multiple tool interactions. It can also help ensure that the correct tool answer is associated with the right tool call in the LLM context. |
| Tool Call ID | CognigyScript | Appears if Tool Answer is selected as Type. This parameter allows you to specify the unique identifier of the corresponding tool call, for example, getWeather_12345. The Tool Call ID parameter can be used to link the tool answer back to the specific tool call it is responding to, ensuring that the LLM can correctly associate the response with the original request and maintain coherent reasoning in conversations with multiple tool interactions. |
| Content | CognigyScript | Appears if Tool Answer is selected as Type. This parameter allows you to specify the content of the tool answer, for example, Location is New York. Date is 2024-06-01. The content can provide the LLM with the necessary information returned from the external tool, which can then be used to generate informed responses or continue reasoning based on the results of the tool call. The parameter helps ensure that the LLM has access to the relevant data needed to process the tool answer effectively. |
| Input | JSON | Appears if Tool Call is selected as Type. This parameter allows you to specify the input parameters for the tool call in a structured JSON format, for example, {"index":0,"id":"call_zxXusUexoviWzfwqjNHLfjP4","type":"function","function":{"name":"unlock_account","arguments":{"email":"elena@cognigy.com"}}}. The input data can be used by the LLM to understand the context of the tool call and to generate more accurate responses based on the provided parameters. It also helps ensure that the tool receives the necessary information to execute correctly. |
| Header | CognigyScript | Appears if Debug Log is selected as Type. This parameter allows you to specify a header for the debug log entry, for example, API Response Debug. The header can help categorize and identify different debug messages in the transcript, making it easier to analyze and troubleshoot issues during development. The parameter provides a clear label for the type of information being logged. |
| Message | CognigyScript | Appears if Debug Log is selected as Type. This parameter allows you to specify the content of the debug log message, for example, Received response status: Success. The message can contain detailed information about the system’s internal state, API responses, or other relevant data for diagnosing and resolving issues during the development process. The parameter provides valuable insights into the system’s behavior. |
| Meta Data | JSON | Appears if Debug Log is selected as Type. This parameter allows you to specify additional metadata for the debug log entry in a structured JSON format, for example, { "timestamp": "2024-06-01T12:00:00Z", "severity": "info" }. The metadata can provide context about the debug message, such as when it was logged and its severity level, which can help developers prioritize and analyze logs more effectively during troubleshooting. |
| Text | CognigyScript | Appears if Input or Output is selected as Type. Enter the text depending on the type of the message: for Input, the end-user message text; for Output, the message text that the assistant or agent adds to the transcript. |
| Data | JSON | Appears if Input or Output is selected as Type. Optional structured metadata that complements the Text field. For example, { "type": "motivational" } can be used to categorize the input or provide context to influence how the system processes the text. |
Use Cases
The following use cases illustrate how the Add Transcript Step Node is used:
- Add External System Results. Inject API responses such as CRM data, ID&V results, or order information into the transcript so the AI Agent can use this data for reasoning.
- Manage AI Agent Transitions. Add transcript entries to mark milestones, such as completed handovers or transitions between AI Agents.
- Context Refocusing. Insert targeted messages to highlight new priorities or provide technical context in long-running conversations.
- Workflow-Based Context Injection. Add follow-up instructions to guide subsequent behavior after deterministic processes—for example, GDPR handling.
Examples
The examples show how different message types from different roles appear in the conversation transcript, how they are processed, and how they influence the conversation. Each role is shown with its corresponding JSON representation in the Input object. The "source": "system" key-value pair indicates that the role was added to the transcript by the system using the Add Transcript Step Node.
User — Input
End-user messages are processed to detect intent and trigger Tool Calls or Outputs. Metadata can be used to pass structured parameters through the Flow.
Parameters:
- type: Input
- text: What is the weather like in New York today?
- data: { "location": "New York", "date": "2024-06-01" }
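As a rough illustration, this entry's JSON representation in the Input object might look as follows; the exact key names and casing are assumptions, not guaranteed fields:

```json
{
  "role": "user",
  "type": "input",
  "text": "What is the weather like in New York today?",
  "data": { "location": "New York", "date": "2024-06-01" },
  "source": "system"
}
```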
Assistant — Output
Outputs can be displayed directly to the user or guide conditional logic in the Flow based on metadata types.
Parameters:
- type: Output
- text: Your current account balance is $1,245.67
- data: { "type": "responsive" }
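A possible JSON shape for this entry (field names assumed for illustration):

```json
{
  "role": "assistant",
  "type": "output",
  "text": "Your current account balance is $1,245.67",
  "data": { "type": "responsive" },
  "source": "system"
}
```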
Assistant — Tool Call
Calls an external tool to fetch data or perform actions. The id property links this call to the Tool Answer for later processing.
Parameters:
- type: Tool Call
- name: getWeather
- id: getWeather-123
- input: { "location": "New York", "date": "2024-06-01" }
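Sketched as a JSON entry; the "toolCall" type value and the field names are assumptions for illustration:

```json
{
  "role": "assistant",
  "type": "toolCall",
  "name": "getWeather",
  "id": "getWeather-123",
  "input": { "location": "New York", "date": "2024-06-01" },
  "source": "system"
}
```

Note how the id value here matches the toolCallId of the Tool Answer entry, which is what lets the LLM pair the call with its result.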
Agent — Output
Represents internal reasoning or planning. Influences workflow decisions but is generally not shown to the end user.
Parameters:
- type: Output
- text: Verified customer identity and approved the transaction
- data: { "type": "responsive" }
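Illustrative JSON shape (assumed field names):

```json
{
  "role": "agent",
  "type": "output",
  "text": "Verified customer identity and approved the transaction",
  "data": { "type": "responsive" },
  "source": "system"
}
```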
System — Debug Log
For monitoring, logging, and troubleshooting AI Agent execution. Helps developers diagnose errors without exposing them to users.
Parameters:
- type: Debug Log
- header: Validation Error
- message: User ID missing in request
- metadata: { "severity": "info" }
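Illustrative JSON shape (the "debugLog" type value and field names are assumptions):

```json
{
  "role": "system",
  "type": "debugLog",
  "header": "Validation Error",
  "message": "User ID missing in request",
  "metadata": { "severity": "info" },
  "source": "system"
}
```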
Tool — Tool Answer
Contains the result of a tool call. The AI Agent can use this output to craft responses or make further decisions in the workflow.
Parameters:
- type: Tool Answer
- toolCallId: getWeather-123
- name: getWeather
- content: The weather in New York is +10°C
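Illustrative JSON shape (assumed field names); the toolCallId ties this answer back to the Tool Call entry with the matching id:

```json
{
  "role": "tool",
  "type": "toolAnswer",
  "toolCallId": "getWeather-123",
  "name": "getWeather",
  "content": "The weather in New York is +10°C",
  "source": "system"
}
```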