This guide shows how to configure Agent Copilot for real-time translation in a voice-to-chat application. In this example, the AI Agent receives calls, recognizes the caller’s language, and hands the conversation over to a human agent. The human agent chats in their own language, while the caller hears voice responses in theirs. This guide includes configuring the following:
  • AI Agent Flow — configures the AI Agent to receive calls, recognize the caller’s language, and hand them over to human agents.
  • Agent Copilot Flow — configures the Agent Copilot workspace and the real-time translation widget.
For a comprehensive guide to set up a voice-to-chat application with Agent Copilot, see Getting Started with Agent Copilot for Voice-to-Chat.

Prerequisites

Configure Real-Time Translation for Agent Copilot

Configure the AI Agent Flow

Step 1: Create a Flow and Add a Say Node for Greeting

  1. In Build > Flows, create a Flow and give it a name, for example, Real-Time Translation Agent Flow.
  2. Add a Say Node and configure the following settings:
    • Text — enter a greeting message, for example, Hello! Welcome to our customer service line!. Save the Node.
Step 2: Add a Question Node for Language Recognition

  1. Below the Say Node, add a Question Node and configure the following settings:
    1. Question Type — select Text.
    2. Click the plus icon and select Voice Gateway from the list.
    3. Text — enter What is your preferred language?.
    4. Activate Set Activity Parameters.
    5. In the Recognizer (STT) section, select your speech provider from the STT Vendor list.
    6. STT Language — select the Flow language, for example, English (United States).
    7. Activate Recognize Language and select the languages you want to recognize, for example, German (Germany), Portuguese (Brazil), from the Alternative Language 1 and Alternative Language 2 lists. Save the Node.
Step 3: Add a Lookup Node for Language-Specific Voice Output

  1. Below the Question Node, add a Lookup Node and configure the following settings:
    • Type — select CognigyScript.
    • Operator — enter the following:
    input.data.payload?.speech?.language_code
    
    This expression reads the recognized language code from the speech payload. Save the Node.
  2. Set the Case Nodes as follows:
    • Case Node 1:
      • Value — enter de-DE. For language recognition, use the full BCP-47 format.
    • Case Node 2:
      • Value — enter pt-BR. For language recognition, use the full BCP-47 format.
  3. Below each Case Node, add a Set Translation Node and configure as follows:
    1. Set Translation Node for German:
      • User Input Language — enter de. For translation, use the ISO 639-1 language code.
      • Flow Language — enter en.
      • (Optional) In the Settings section, enter a name to distinguish the Node in the Label field, for example, EN/DE. Save the Node.
    2. Set Translation Node for Portuguese:
      • User Input Language — enter pt. For translation, use the ISO 639-1 language code.
      • Flow Language — enter en.
      • (Optional) In the Settings section, enter a name to distinguish the Node in the Label field, for example, EN/PTBR. Save the Node.
  4. Below each Set Translation Node, add a Set Session Config Node and configure the Synthesizer (TTS) settings as follows:
    1. Set Session Config Node for German:
      • TTS Vendor — select your speech provider.
      • TTS Language — select German.
      • TTS Voice — select a voice for your AI Agent to use when speaking in German.
    2. Set Session Config Node for Portuguese:
      • TTS Vendor — select your speech provider.
      • TTS Language — select Portuguese.
      • TTS Voice — select a voice for your AI Agent to use when speaking in Portuguese.
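To see how the Lookup, Case, and Set Translation Nodes fit together, the following sketch illustrates the branching logic. The payload shape is an assumption for illustration: the exact structure depends on your Voice Gateway and STT vendor, and only the `speech.language_code` path used by the Lookup Node above is taken from this guide.

```javascript
// Hypothetical payload after language recognition -- only
// speech.language_code is assumed; the rest depends on your setup.
const input = {
  data: {
    payload: {
      speech: { language_code: "de-DE" }
    }
  }
};

// The Lookup Node's operator uses optional chaining, so it yields
// undefined (falling through to the Default Case) if no language
// was recognized:
const recognized = input.data.payload?.speech?.language_code;
console.log(recognized); // "de-DE" -> routed to the de-DE Case Node

// Note the two formats used in the steps above: Case Nodes match the
// full BCP-47 tag, while the Set Translation Nodes take the two-letter
// ISO 639-1 code, which is the primary-language subtag of the BCP-47 tag:
const iso639 = recognized.split("-")[0];
console.log(iso639); // "de" -> User Input Language for translation
```

This also explains why the Case Node values (`de-DE`, `pt-BR`) differ from the Set Translation values (`de`, `pt`): recognition and translation use different language-code standards.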
Step 4 (Optional): Add an LLM Prompt Node for a Human-Like Handover Message

  1. Below the Lookup Node, add an LLM Prompt Node and configure the System Prompt field as:
Based on the user's message, reply kindly and advise you are handing over to a human agent.
Save the Node.
Step 5: Configure Handover

Below the Lookup Node or the LLM Prompt Node, add a Handover to Agent Node and configure the following:
  • In the Handover Settings section, select your handover provider from the Handover Provider list. Depending on the handover provider, you may need to configure additional settings.

Configure the Agent Copilot Flow

Step 1: Configure the Agent Copilot Grid

  1. In Build > Flows, create a Flow and give it a clear name, for example, Real-Time Translation Copilot Flow.
  2. Add a Copilot: Set Grid Node and set the Copilot Grid Configuration as follows:
{
  "grid": {
    "columns": 2,
    "rows": 2,
    "gap": 5
  },
  "tiles": {
    "next-action": {
      "x": 1,
      "y": 1,
      "rows": 1,
      "columns": 1
    },
    "transcript": {
      "x": 1,
      "y": 2,
      "rows": 1,
      "columns": 1
    }
  }
}
  3. Below the Copilot: Set Grid Node, add an LLM Prompt Node and configure it to generate the next best response for the agent based on the user’s latest input and to store the result in the Input object:
    • System Prompt — enter the following:
      Based on the user's last message, suggest the most helpful next response for a human agent.
      
    • In the Storage & Streaming section, configure the following:
      • How to handle the result — set to Store in Input.
      • Input Key to store Result — set to input.promptResult.
  4. Add a Copilot: Next Action Tile Node after the LLM Prompt Node and configure the Node as follows:
    • Tile ID: next-action
    • Text:
    {{input.promptResult.detailedResult.choices[0].message.content}}
    
  5. Add a Copilot: Transcript Tile Node and set Tile ID to transcript.
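For orientation, the Text field of the Next Action Tile resolves a CognigyScript token against the stored LLM result. The sketch below assumes an OpenAI-style chat-completion shape under `input.promptResult`; the exact structure of `detailedResult` depends on your LLM provider, and the content string is a made-up example.

```javascript
// Hypothetical stored LLM result -- assumes an OpenAI-style
// completion; verify the shape returned by your LLM provider.
const input = {
  promptResult: {
    detailedResult: {
      choices: [
        { message: { content: "Suggested next response for the agent." } }
      ]
    }
  }
};

// The Next Action Tile's Text field resolves this token,
// extracting the first choice's message content:
const text = input.promptResult.detailedResult.choices[0].message.content;
```

If the tile stays empty, inspect `input.promptResult` in the Interaction Panel to confirm where your provider places the generated text.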
Step 2: Create a Voice Copilot Endpoint

  1. In Deploy > Endpoints, click + New Endpoint and select Voice Copilot.
  2. In the New Endpoint section, configure the following:
    • Name — enter a unique name, for example, Real-Time Translation Copilot Endpoint.
    • Flow — select the Agent Copilot Flow you created from the list.
With this grid, the Agent Copilot workspace displays a widget with the conversation transcript and a widget with an LLM-generated next action, which is translated into the user’s language when sent. You can now test real-time translation with Agent Copilot in voice-to-chat applications.

More Information