
Cognigy.AI Docs

Cognigy.AI is the Conversational AI Platform focused on the needs of large enterprises to develop, deploy, and run conversational AIs on any conversational channel.

Given the rising need for voice interfaces as the most natural way of communicating with brands, Cognigy was founded in 2016 by Sascha Poggemann and Phil Heltewig. Our mission: to enable all devices and applications to intelligently communicate with their users via naturally spoken or written dialogue.


Description

A Say Node is used to send a message to the user.

Depending on the current Channel, additional rich media formats are available. Add a new channel output by clicking the "+" icon and selecting the channel that corresponds to the channel endpoint that will be deployed.

The Say Node menu with all channel output types enabled.

If there is a Channel-specific configuration for the current Channel, that configuration is used instead of the one in the default section.

AI (default channel)

👍

Automatic Conversion to Channel Specific Output

In case rich media is configured in the default AI tab, the platform will attempt to automatically convert the output to the channel's equivalent. Please check the specific Output Type above to verify channel support.

🚧

Fallback Text

If the automatic conversion to channel-specific output cannot take place, the Fallback Text is sent instead.

Output Types


The AI Channel allows for the configuration of different Output Types:

Text

The Text Output Type renders text and emojis (if supported by the channel). The text field also supports CognigyScript and Tokens, which can be added by clicking the AI button at the end of each field.

👍

Channel Support

The Text Output Type is currently converted to compatible output on all channels. Please keep in mind that emojis may not render properly on all channels.

Multiple text messages can be added for conversational variation. When multiple text messages are configured, the delivery order is controlled by the linear and loop settings available in the options dropdown menu.

🚧

Using Multiple Text Outputs

When multiple messages are configured in a text Say Node, only one message is delivered per activation of the node. To send two text messages at once, configure an additional Say Node.

Text Options

When sending simple text output, Cognigy.AI dialog nodes provide options for configuring the behavior of output and attaching data to the message. The configuration options and their functions are listed below:

| Parameter | Type | Description |
| --- | --- | --- |
| Linear | Toggle | Iterates through the text options linearly instead of randomly. |
| Loop | Toggle | If Linear is set, the order restarts at the first text option after reaching the end. Otherwise, the last text option is repeated once it is reached. |
| Data | JSON | The data you want to send to the client. |
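The Data option accepts arbitrary JSON that is attached to the outgoing message. A minimal sketch (the field names here are purely illustrative, not a fixed schema):

```json
{
    "articleId": "A-1234",
    "showRating": true
}
```

A client such as a custom webchat integration can then read this data from the message payload, for example to render custom UI elements alongside the text.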

📘

Linear and Loop

There are three possible combinations of the Linear and Loop toggles, each producing a different delivery order across repeated activations of the same node. The three combinations are:

  • Random (e.g. 4, 2, 5, 4, 4, 2, 5, 1, ...)
  • Linear + non-looping (e.g. 1, 2, 3, 4, 5, 5, 5, 5...)
  • Linear + looping (e.g. 1, 2, 3, 4, 5, 1, 2, 3, 4, 5...)

Text with Quick Replies

Text with Quick Replies can be used to show the user a number of configurable Quick Replies. Quick Replies are pre-defined answers that are rendered as input chips.

The click action can be configured to be Phone Number or Send Postback.

Postback Value

When a Postback Value is configured and the Quick Reply is clicked, the Postback Value is sent to the start of the Flow. This simulates user input, as if the user had typed the text manually. This is the most common behavior for Quick Replies.

Phone Number

When this option is configured, clicking the Quick Reply will try to open the phone application on the device.

Trigger Intent

The Trigger Intent feature allows you to manually trigger an Intent by writing cIntent: followed by the desired Intent name in your text input (e.g. cIntent:MyIntent); the regular Intent mapping is then ignored.
For more information, see Trigger Intent.

🚧

Channel Support

The Text with Quick Replies Output Type is currently converted to compatible output on the following channels: Webchat, Messenger, Google Actions, Line, Azure Bot Service, Sunshine Conversations, Slack and RingCentral Engage.

Gallery

Galleries are powerful visual widgets that are ideal for showing a list of options with images. They are typically used to show a number of products or other items that can be browsed.

A gallery can be configured with a number of cards. A card contains an image, a title and a subtitle and can be configured with (optional) buttons.

🚧

Channel Support

The Gallery Output Type is currently converted to compatible output on the following channels: Webchat, Messenger, Line, Azure Bot Service, Sunshine Conversations, Slack and RingCentral Engage.

Text with Buttons

Text with Buttons is a similar Output Type to Text with Quick Replies. The difference comes from the way the widget is rendered, which resembles a vertical list of button options. It can be configured in a similar fashion.

🚧

Channel Support

The Text with Buttons Output Type is currently converted to compatible output on the following channels: Webchat, Messenger, Google Actions, Line, Azure Bot Service, Sunshine Conversations, Slack and RingCentral Engage.

List

List output allows a customized list of items to be displayed with many configuration options such as the header image, buttons, images and more.

The first list item can optionally be converted to a header item that houses the list title, subtitle and button. Each additional list item can have a title, subtitle, image and button added. The list can also have a button added at the bottom.

🚧

Channel Support

The List Output Type is currently converted to compatible output on the following channels: Webchat, Azure Bot Service, RingCentral Engage, Userlike and Line.

Audio

The Audio Output Type can render audio output if this is supported by the channel. It is configured by providing a URL to an audio file.

🚧

Channel Support

The Audio Output Type is currently converted to compatible output on the following channels: Webchat, Messenger, Line, Azure Bot Service, Sunshine Conversations and Slack.

Image

Image Output Types display an image in a similar fashion to the gallery. The image output, however, only outputs one particular image.

🚧

Channel Support

The Image Output Type is currently converted to compatible output on the following channels: Webchat, Messenger, Line, Azure Bot Service, Sunshine Conversations and Slack.

Video

The Video Output Type allows you to configure a video output. It takes a URL as an input parameter and will start playing the video automatically if this is supported by the channel.

🚧

Channel Support

The Video Output Type is currently converted to compatible output on the following channels: Webchat, Messenger, Line, Azure Bot Service, Sunshine Conversations and Slack.

Please note: the Messenger channel requires videos to be in MP4 format (e.g. YouTube links might not work).

📘

CognigyScript

Any text field in the Say Node supports the use of CognigyScript.
For detailed instructions, read the chapter on CognigyScript.

Alexa

Defines what an Amazon Echo-enabled device will say as an answer.

SSML Editor


In addition to regular text output, Alexa supports SSML which enables the admin to define the way the output is pronounced.
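As a minimal sketch, standard SSML tags such as the following can control pauses and pronunciation (the sentence content is illustrative):

```xml
<speak>
    Hello! <break time="500ms"/>
    Your order number is <say-as interpret-as="digits">1234</say-as>.
</speak>
```

Here, break inserts a pause and say-as forces the number to be read digit by digit; refer to the Amazon Alexa SSML reference for the full set of supported tags.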

🚧

Multiple voice outputs

If more than one Say Node is hit in one Flow execution using the Alexa Channel, each Say Node's SSML (or text) output will be concatenated and sent as one large SSML statement.

📘

CognigyScript in SSML

You can also use CognigyScript expressions within SSML parameters.

Home Cards


🚧

Multiple Cards

If more than one Card is being output during one Flow execution, only the last one will be sent.

Defines an optional additional Card that is available to the user through a connected Alexa app. Cards can be used to provide additional information that cannot be perceived without a screen.

The following Card Templates are available:

  • Text
  • Text & Image
  • Link Account

Display Templates for Echo Show


🚧

Multiple Displays

If more than one Display configuration is being output during one Flow execution, only the last one will be sent.

Defines content that will be shown on Amazon Echo Show devices.

The following Display Templates are available:

  • Full-width Text
  • Text & Image right
  • Text & Image left
  • Image & Text overlay
  • Vertical List
  • Horizontal List

Custom JSON Directives


Instead of going with the WYSIWYG approach, you may also define a directive manually using a CognigyScript-enabled JSON field.

For further details see the Amazon Alexa Documentation.

Messenger

Defines Templates that can be displayed in a special way in the Facebook Messenger Channel.

The following Facebook Messenger Templates are available:

  • Text & Quick Replies
  • Buttons
  • Gallery
  • Attachment
  • List

👍

Output any Facebook JSON

Instead of using the UI functions provided by Cognigy, you can also output arbitrary JSON by selecting Custom JSON as the Type. This lets you see the JSON you compiled through Cognigy and modify it or add to it.

👍

Using Code Nodes to output Facebook Markup

You can use the output action in Code Nodes to send JSON directly to Facebook. To do that, set the following code as the data property:

```json
{
    "_cognigy": {
        "_facebook": {
            "message": {
                // this contains your message to Facebook
            }
        }
    }
}
```

See more under Code Nodes
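As a hedged sketch, the message object follows the Facebook Send API format. For example, a text message with two quick replies might look like this (titles and payloads are illustrative):

```json
{
    "_cognigy": {
        "_facebook": {
            "message": {
                "text": "Do you want to continue?",
                "quick_replies": [
                    { "content_type": "text", "title": "Yes", "payload": "CONTINUE_YES" },
                    { "content_type": "text", "title": "No", "payload": "CONTINUE_NO" }
                ]
            }
        }
    }
}
```

When a quick reply is clicked, Messenger sends the configured payload back, which the Flow can then process like user input.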

โ—๏ธ

Location Button Deprecation

The Quick Reply Button "Location" to send a users location has been deprecated by Facebook Messenger and is no longer available. Please remove it if you have it in an older Flow, as Facebook Messenger will reject the full message if there is still a location quick reply defined.

Google

If you want to output a Card, List, SSML or any other advanced option on the Google Assistant, you can use the Google Actions tab.

Keep Session Open


This toggle defines whether the session should be kept open or end after this output. Turn it off when you wish to end the conversation, and leave it on otherwise.

🚧

Multiple Keep Session Open Values

If more than one Say Node is hit during Flow execution, the Keep Session Open value of the last output is used.

SSML editor


With our Google Actions SSML Editor you can build your Google Assistant output speech by either entering SSML markup or by using our SSML markup templates (see figure below).

SSML Markup Templates

🚧

Multiple Voice Outputs

If more than one Say Node is hit in one Flow execution using the Google Channel, each Say Node's SSML (or text) output will be concatenated and sent as one large SSML statement.

📘

Simple Response Text

Instead of writing SSML in the SSML editor, you can also enter your text in the default tab. As long as there is no content in the Google Actions SSML editor, the text from the default tab text field will be used.

For further details see the Google Actions Documentation

Display - Rich Response


Basic Card
Displays information that can include a title, a subtitle, a description, an image and a button.

Limitations:

  • Requires additional text output (e.g. output speech)
  • Description: 10 lines with an image, 15 lines without an image

Media Response
Plays audio content.

Limitations:

  • Requires additional text output (e.g. output speech)
  • Must include Suggestion Chips if the session is kept open
  • Only supports .mp3 format
  • The media file URL has to support HTTPS

Browsing Carousel
A carousel that displays web content.

Limitations:

  • Requires additional text output (e.g. output speech)
  • Min. 2 tiles
  • Max. 10 tiles
  • Tiles must link to web content
  • All tiles must have the same components

🚧

Multiple Rich Responses

If more than one Rich Response is being output during one Flow execution, only the last one will be sent.

For details and requirements see the Google Actions Documentation

Display - Suggestions


A suggestion (chip) is used to steer the conversation in a defined direction.

🚧

Multiple Suggestions

If more than one output contains Suggestions during one Flow execution, only the last one will be sent.

Limitations

  • You can add a maximum of 8 chips to a response
  • A suggestion/chip can contain text with a maximum length of 20 characters

Custom Google Action Response JSON


Within the custom JSON field you are able to define complex responses. Please visit the Google Actions Documentation for further details.
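As a hedged sketch of such a response, a simple rich response in the Actions on Google format might look like this (the speech text and suggestion title are illustrative; verify the exact schema against the Google Actions documentation):

```json
{
    "richResponse": {
        "items": [
            {
                "simpleResponse": {
                    "textToSpeech": "Here is what I found.",
                    "displayText": "Here is what I found."
                }
            }
        ],
        "suggestions": [
            { "title": "Show more" }
        ]
    }
}
```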

Webchat

The Webchat Channel features the configuration options of our Facebook integration.

As the output format is the same, you can configure the Webchat Channel to use your output from the Facebook tab or manually override it for Webchat-specific customization.

The Webchat will render HTML markup for outputs from the DEFAULT tab's text as well as the text field from the Webchat tab's "Text + Quick Replies" template.

LINE

The LINE tab provides two methods for creating and editing messages that are specific to the LINE channel:

  • Text for sending text message responses
  • Custom JSON for defining more complex messages and templates

| Type | Description |
| --- | --- |
| Text | A simple text message |
| Custom JSON | Can contain a valid LINE message object. See the Line Documentation for further details and templates. |
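As an illustrative sketch, a valid LINE text message object looks like this (the text content is a placeholder):

```json
{
    "type": "text",
    "text": "Hello from the Flow!"
}
```

More complex templates (e.g. flex messages) follow the same message object structure; see the LINE Messaging API documentation for the full set of types.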

Twilio

| Type | Description |
| --- | --- |
| Text | A simple text message |
| TwiML | Can contain valid TwiML. See the Twilio Documentation for further details and templates. |

🚧

Validate your TwiML

Please make sure that the TwiML you enter in the editor is valid. If the TwiML sent to Twilio is invalid, the call will fail immediately or not initiate at all.

You will also have to make sure that the content of your TwiML is escaped XML.
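For reference, a minimal valid TwiML document that speaks a sentence on a voice call looks like this (the sentence is illustrative):

```xml
<Response>
    <Say>Your appointment has been confirmed.</Say>
</Response>
```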

Amazon Polly Voice


In the endpoint editor of your Twilio Endpoint you can select the Amazon Polly voice. Polly's features are listed in Twilio's documentation:

Twilio Amazon Polly

Twilio SMS

| Type | Description |
| --- | --- |
| Text | A simple text message |
| TwiML | Can contain valid TwiML. See the Twilio Documentation for further details and templates. |

🚧

Validate your TwiML

Please make sure that the TwiML you enter in the editor is valid. If the TwiML sent to Twilio is invalid, the message will fail immediately or not be sent at all.

You will also have to make sure that the content of your TwiML is escaped XML.
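Similarly, a minimal TwiML document for an SMS reply uses the Message verb (the text is illustrative):

```xml
<Response>
    <Message>Thanks for your message!</Message>
</Response>
```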

Microsoft Teams

📘

Teams Cards

Structured content in Microsoft Teams is sent as so-called Cards. Please refer to our Deploy a Microsoft Teams Endpoint page for information on how to send messages.

| Type | Description |
| --- | --- |
| Text | A simple text message |
| JSON | Can contain valid JSON in the Bot Framework / Microsoft Teams format. See the Microsoft Documentation for further details and templates. |
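As a hedged sketch, a Bot Framework message carrying a Hero Card attachment might look like this (titles, text and the URL are illustrative; check the Microsoft documentation for the exact schema):

```json
{
    "attachments": [
        {
            "contentType": "application/vnd.microsoft.card.hero",
            "content": {
                "title": "Order status",
                "subtitle": "Order A-1234",
                "text": "Your order has been shipped.",
                "buttons": [
                    { "type": "openUrl", "title": "Track shipment", "value": "https://example.com/track" }
                ]
            }
        }
    ]
}
```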

🚧

Multiple Flow Outputs

If more than one Say Node is hit in one Flow execution using the Microsoft Teams Channel, each Say Node's Default Text or Microsoft Teams Text output will be concatenated and sent as one message. However, if one or more of the Say Nodes contain Microsoft Teams JSON, only the JSON of the last such node will be sent.

URL opening options in the existing browser tab in Webchat Widget

As of release v4.5, you can decide whether a URL opens in the same Webchat Widget window or in a new one when using the Say Node options "Text with Buttons", "Gallery" or "List".

Say Node example using "Text with Buttons"

  1. Create a Flow with a Say Node.
  2. Open the Say Node editor, select the "Text with Buttons" option and click "Add a new Button".
  3. Select "URL" as the Button Type and enter the Internet address you want.
  4. Complete the configuration by selecting "Open URL in a new tab" or "Open URL in the same tab".
  5. Deploy an Endpoint with the Webchat option and start a bot conversation.
    The configured Say Node Buttons will be displayed in the Webchat Widget.
  6. Click the Button.
  7. Depending on the configuration, the URL opens in a separate tab or in the same tab.

Say Node configuration with 'URL Target' setting "Open URL in the same tab".

Webchat with button "Show URL in same tab".

URL has been opened in the same tab.

Say Node configuration with 'URL Target' setting "Open URL in new tab".

Webchat with button "Show URL in new tab".

URL has been opened in a new tab.
