
Generative AI


Note

You are subject to the terms of the Generative AI model providers to which you are connecting. Cognigy cannot take responsibility for your use of third-party services, systems, or materials.

Generative AI refers to a type of artificial intelligence that creates new, original content, such as images, video, audio, and text, using machine learning algorithms. It works by learning from existing data and producing new content based on that learning.

Cognigy.AI integrates with various LLM Providers to enable Generative AI functionality. This functionality is broadly classified as using Large Language Models (LLMs) for design-time features, such as generating AI Agent resources, and run-time features, such as dynamic interactions during conversations.

Prerequisites

Before using this feature, you need to create an account with one of the supported LLM Providers.

Set up Generative AI

To set up the connection between Cognigy.AI and the Generative AI Provider, do the following:

Add a Model

  1. Open the Cognigy.AI interface.
  2. Go to Build > LLM.
  3. Click +New LLM.
  4. In the New LLM window, select a model from the Model Type list.
  5. Add a unique name and description for your model.
  6. From the Provider list, select an LLM provider. The configuration steps below depend on the selected provider:

    Azure OpenAI

    6.1 From the Model list, select a model presented in the list or add a custom model that is not listed. If you select Custom Model, configure the following fields:
    - Model Type — select Chat for models that support the Chat Completions API, Completion for the Completions API, or Embedding for the Embeddings API. For more information, refer to the Azure OpenAI documentation.
    - Model Name — specify the ID of the model that you want to use as a custom model. To find model IDs, refer to the Azure OpenAI documentation.
    6.2 Click Save.
    6.3 On the LLM Editor page, go to the Generative AI Connection field.
    6.4 On the right side of the field, click +.
    6.5 In the Connection name field, enter a unique name for your connection.
    6.6 From the Connection Type list, select one of the following authorization methods:
    - API Key — add an Azure API Key. This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2.
    - OAuth2 — this method is experimental, hidden behind the FEATURE_ENABLE_OAUTH2_AZURE_CONNECTION_WHITELIST feature flag, and may encounter some issues. Add credentials for the OAuth 2.0 authorization code flow. OAuth 2.0 offers more control and security than API keys by allowing specific permissions, expiring tokens, and reducing exposure through short-lived tokens instead of constant client secret use. To use this type of connection, fill in the following fields:
        - Client ID — add the Application (client) ID assigned to your app. You can find it in the Azure AI app registration overview.
        - Client Secret — add the application secret created in the Certificates & secrets section of the Azure AI app registration portal.
        - OAuth URL — add the URL to retrieve the access token. The URL should be in the https://<your-domain>.com/as/token.oauth2 format.
        - Scope — add a list of scopes for user permissions, for example, urn:grp:chatgpt.
    6.7 Click Create.
    6.8 Fill in the remaining fields:
    - Resource Name — add a resource name. To find this value, go to the Microsoft Azure home page. Under Azure services, click Azure OpenAI. In the left-side menu, under the Azure AI Services section, select Azure OpenAI. Copy the desired resource name from the Name column.
    - Deployment Name — add a deployment name. To find this value, go to the Microsoft Azure home page. Under Azure services, click Azure OpenAI. In the left-side menu, under the Azure AI Services section, select Azure OpenAI. Select a resource from the Name column. On the resource page, go to Resource Management > Model deployments. On the Model deployments page, click Manage Deployments. On the Deployments page, copy the desired deployment name from the Deployment name column.
    - API Version — add the API version to use for this operation, in the YYYY-MM-DD format. Note that the version may have an extended format, for example, YYYY-MM-DD-preview.
    - Custom URL — this parameter is optional. To control the connection between your clusters and the Azure OpenAI provider, you can route connections through dedicated proxy servers, creating an additional layer of security. To do this, specify the URL in the following pattern: https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>/<model-type>?api-version=<api-version>. When a Custom URL is added, the Resource Name, Deployment Name, and API Version fields are ignored.
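The Custom URL follows directly from the other three fields. A minimal sketch of assembling it according to the pattern above (all values below are placeholders, and the model-type path segment is an assumption based on Azure OpenAI's endpoint naming):

```python
def azure_openai_custom_url(resource_name: str, deployment_name: str,
                            model_type: str, api_version: str) -> str:
    """Build an Azure OpenAI endpoint URL following the documented
    pattern. model_type is the API path segment, e.g.
    "chat/completions", "completions", or "embeddings"."""
    return (
        f"https://{resource_name}.openai.azure.com/openai/deployments/"
        f"{deployment_name}/{model_type}?api-version={api_version}"
    )

# Example with placeholder values:
url = azure_openai_custom_url("my-resource", "my-gpt4-deployment",
                              "chat/completions", "2024-02-01")
```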

    OpenAI

    6.1 From the Model list, select a model presented in the list or add a custom model that is not listed. If you select Custom Model, configure the following fields:
    - Model Type — select Chat for the https://api.openai.com/v1/chat/completions API, Completion for the https://api.openai.com/v1/completions API, or Embedding for the https://api.openai.com/v1/embeddings API. For more information, refer to the OpenAI Text Generation Models documentation.
    - Model Name — specify the name of the model that you want to use as a custom model. To find model names, refer to the OpenAI models documentation.
    6.2 Click Save.
    6.3 On the LLM Editor page, go to the Generative AI Connection field.
    6.4 On the right side of the field, click +.
    6.5 Fill in the following fields:
    - Connection name — create a unique name for your connection.
    - apiKey — add an API Key from your OpenAI account. You can find this key in the User settings of your OpenAI account.
    6.6 Click Create.
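The three Model Type options map one-to-one onto the three OpenAI endpoints listed above. A small sketch of that mapping:

```python
# Map of the Model Type options to the OpenAI endpoints listed above.
OPENAI_ENDPOINTS = {
    "Chat": "https://api.openai.com/v1/chat/completions",
    "Completion": "https://api.openai.com/v1/completions",
    "Embedding": "https://api.openai.com/v1/embeddings",
}

def openai_endpoint(model_type: str) -> str:
    """Return the OpenAI endpoint for a given Model Type selection."""
    try:
        return OPENAI_ENDPOINTS[model_type]
    except KeyError:
        raise ValueError(f"Unknown model type: {model_type}") from None
```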

    Anthropic

    6.1 From the Model list, select a model presented in the list or add a custom model that is not listed. If you select Custom Model, configure the following fields:
    - Model Type — select Chat for models that support the Messages API, Completion for the Completions API, or Embedding for the Embeddings API. For more information, refer to the Anthropic Model Comparison (API format) documentation.
    - Model Name — specify the name of the model that you want to use as a custom model. To find model names, refer to the Anthropic documentation.
    6.2 Click Save.
    6.3 Fill in the following fields:
    - Connection name — create a unique name for your connection.
    - apiKey — add an API Key that you generated via Account Settings in Anthropic.
    6.4 Click Create.
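For orientation, the Anthropic API expects the key in an x-api-key header rather than an Authorization bearer token. A minimal sketch of the headers such a request carries (the default version string is an assumption; check Anthropic's versioning documentation for current values):

```python
def anthropic_headers(api_key: str, version: str = "2023-06-01") -> dict:
    """Headers for an Anthropic Messages API request: the API key goes
    in x-api-key, and an anthropic-version header is required."""
    return {
        "x-api-key": api_key,
        "anthropic-version": version,
        "content-type": "application/json",
    }
```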

    Google Vertex AI

    6.1 From the Model list, select a model presented in the list or add a custom model that is not listed. If you select Custom Model, configure the following fields:
    - Model Type — select the Completion type.
    - Model Name — specify the name of the model that you want to use as a custom model. To find model names, refer to the Google Vertex AI documentation. Note that, within this connection, Cognigy supports only the text-bison models.
    6.2 Click Save.
    6.3 Fill in the Connection name field by specifying a unique name for your connection.
    6.4 To upload the JSON key file for your model, you first need to obtain the key. Go to the Google Cloud console and find Vertex AI via the search bar.
    6.5 On the Vertex AI page, click the Enable All Recommended APIs button to activate an API connection, if it is not activated. Ensure that the Vertex AI API is enabled.
    6.6 In the left-side menu, go to IAM & Admin > Service Accounts.
    6.7 Select Actions and click Manage Keys.
    6.8 On the Keys page, select Add Key and click Create new Key.
    6.9 In the window that appears, select the JSON key type and click Create. The file is downloaded automatically.
    6.10 In Cognigy, in the New Connection window, click Upload JSON file and upload the file.
    6.11 Click Create.
    6.12 Fill in the remaining fields:
    - Location — add a region for the model. For example, us-central1.
    - API Endpoint — add a service endpoint for the model. For example, us-central1-aiplatform.googleapis.com. Note that the endpoint should be specified without https:// or http://.
    - Publisher — add an owner's name of the model. If not specified, Google will be used by default. This parameter is optional.
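The Location, API Endpoint, and Publisher fields map onto Vertex AI's documented predict endpoint. A minimal sketch of how such a URL is assembled (the project ID comes from the uploaded service-account key file, the publisher defaults to Google as noted above, and all values here are placeholders):

```python
def vertex_predict_url(api_endpoint: str, project_id: str, location: str,
                       model: str, publisher: str = "google") -> str:
    """Assemble a Vertex AI predict endpoint URL from the connection
    fields. api_endpoint is given without https://, as documented."""
    return (
        f"https://{api_endpoint}/v1/projects/{project_id}"
        f"/locations/{location}/publishers/{publisher}"
        f"/models/{model}:predict"
    )

url = vertex_predict_url("us-central1-aiplatform.googleapis.com",
                         "my-project", "us-central1", "text-bison")
```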

    Google Gemini

    6.1 From the Model list, select a model presented in the list or add a custom model that is not listed. If you select Custom Model, configure the following fields:
    - Model Type — select the Chat type.
    - Model Name — specify the name of the model that you want to use as a custom model. To find model names, refer to the Google Gemini documentation. Note that, within this connection, Cognigy supports only the gemini models.
    6.2 Click Save.
    6.3 Fill in the Connection name field by specifying a unique name for your connection.
    6.4 To upload the JSON file with a key for your model, you need to obtain this key. If you have previously used a key for the Google Vertex AI connection, you can also use this key for Google Gemini; proceed to step 6.10 to add the key. If you're setting up the connection for the first time, go to the Google Cloud console and find Vertex AI via the search bar.
    6.5 On the Vertex AI page, click the Enable All Recommended APIs button to activate an API connection, if it is not activated. Ensure that the Vertex AI API is enabled.
    6.6 In the left-side menu, go to IAM & Admin > Service Accounts.
    6.7 Select Actions and click Manage Keys.
    6.8 On the Keys page, select Add Key and click Create new Key.
    6.9 In the window that appears, select the JSON key type and click Create. The file is downloaded automatically.
    6.10 In Cognigy, in the New Connection window, click Upload JSON file and upload the file.
    6.11 Click Create.
    6.12 In the Location field, add a region for the model. For example, us-central1.

    Aleph Alpha

    6.1 From the Model list, select a model presented in the list or add a custom model that is not listed. If you select Custom Model, configure the following fields:
    - Model Type — select the Completion type.
    - Model Name — specify the name of the model that you want to use as a custom model. To find model names, refer to the Aleph Alpha documentation.
    6.2 Click Save.
    6.3 Fill in the following fields:
    - Connection name — create a unique name for your connection.
    - Token — specify a key that you created in your Aleph Alpha account.
    6.4 Click Create.
    6.5 Fill in the remaining field:
    - Custom URL — this parameter is optional. To control the connection between your clusters and the Aleph Alpha provider, you can route connections through dedicated proxy servers, creating an additional layer of security. To do this, specify the base URL. For example, https://api.aleph-alpha.com.

    Amazon Bedrock

    6.1 From the Model list, select Custom Model and configure the following fields:
    - Model Type — select the Chat type for models that support the Converse API. Note that the model works only if your AWS administrator has granted you access to it.
    - Model Name — specify the ID of the model that you want to use as a custom model. To find model IDs, refer to the Amazon Bedrock documentation.
    6.2 Click Save.
    6.3 Fill in the following fields:
    - Connection name — create a unique name for your connection.
    - Access Key ID — specify an Access Key ID. Log in to the AWS Management Console, go to the IAM dashboard, select Users, and choose the IAM user. Navigate to the Security credentials tab, and under Access keys, create a new access key if one hasn't been created. Copy the Access Key ID provided after creation.
    - Secret Access Key — specify a Secret Access Key. After creating the access key, you'll be prompted to download a file containing the Access Key ID and the Secret Access Key. Alternatively, you can retrieve the Secret Access Key by navigating to the IAM dashboard, selecting the user, going to the Security credentials tab, and clicking Show next to the Access Key ID to reveal and copy the Secret Access Key.
    6.4 Click Create.
    6.5 Fill in the remaining field:
    - Region — enter the AWS region where your model is located, for example, us-east-1 for the US East (N. Virginia) region.
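Once the connection is configured, the model is called through the Converse API mentioned above. For orientation only, a minimal sketch of the request shape that API expects, as you would pass it to a boto3 bedrock-runtime client's converse() call (the model ID and text are placeholders; this is not Cognigy's internal request):

```python
def converse_request(model_id: str, user_text: str) -> dict:
    """Build the minimal keyword arguments for an Amazon Bedrock
    Converse API call: a model ID and a list of role/content messages,
    where each content item wraps its text in a dict."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]}
        ],
    }

# Usage with boto3 (placeholder model ID):
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**converse_request("my-model-id", "Hello"))
```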

  7. To apply changes, click Save.

  8. To check if the connection was set up, click Test.

When the model is added, you will see it in the list of models.

To apply this model for Cognigy.AI features, go to the settings by clicking Manage LLM Features.

Apply the Model

To apply a model, follow these steps:

  1. Open the Cognigy.AI interface.
  2. In the left-side menu, click Manage > Settings.
  3. Go to the section based on your use case for using a model:
    • Generative AI Settings. In the Generative AI Settings section, activate Enable Generative AI Features. This setting is toggled on by default if you have previously set up the Generative AI credentials.
    • Knowledge AI Settings. Use this section if you need to add a model for Knowledge AI. Select a model for the Knowledge Search and Answer Extraction features. Refer to the list of standard models and find the models that support these features.
  4. Navigate to the desired feature and select a model from the list. If there are no models available for the selected feature, the system will automatically select None.
  5. Click Save.

You can check if the connection works by creating a new Generative AI Flow.

Design-Time Generative AI Features

During the design phase of creating AI Agents, LLMs can be used to generate a variety of AI Agent resources:

To know more about the benefits of integrating Conversational AI with Generative AI platforms, watch this webinar:

Generate Lexicons

Note that the generation of Lexicons for primary NLU languages other than German and English is not fully supported.

To use Generative AI technology for creating a new Lexicon, do the following:

  1. In the left-side menu of the Cognigy.AI interface, click Build > Lexicons.
  2. Click + New Lexicon.
  3. In the New Lexicon window, specify a name that covers the Lexicon's main topic and add a relevant description. Both fields influence the generated result: to ensure that the generated content meets expectations, fill in both. Relying solely on the name without a description will not produce the intended results.
  4. Go to the Lexicon Entry Generation setting, and activate Generate Lexicon Entries.
  5. Select Lexicon language from the list.
  6. Set the number of entries (lexicon units).
  7. (Optional) Add the default Slot.
  8. (Optional) Activate Generate Synonyms. Synonyms help AI Agents understand and recognize different variations of the same concept. Up to five synonyms will be generated for each keyphrase.
  9. Click Create.

When the Lexicon Editor opens with the new keyphrases, you can edit or delete them, or add new ones manually.

Generate Flows

Note that the generation of Flows for primary NLU languages other than German and English is not fully supported.

To use Generative AI technology for creating a new Flow with pre-configured Nodes based on your scenario, do the following:

  1. In the left-side menu of the Cognigy.AI interface, click Build > Flows.
  2. Click + New Flow.
  3. In the New Flow window, go to the Flow Generation section and select one of the options:
    • None — the Generative AI will not be applied to this Flow. This setting is activated by default.
    • Name and Description — the Generative AI will use the Name and Description fields for generating Flow.
    • Name and Transcript — the Generative AI will use the Name and Transcript fields for generating Flow. For this setting, you need to create a scenario and put it in the Transcript field. Use the Transcript field template as an example for your scenario.
  4. Generate the Flow by clicking Create.

In the resulting Flow, you can edit the Nodes generated from your scenario, and generate new Intent sentences or responses for the chatbot.

Generate Intent Sentences

Note that the generation of Intent sentences for primary NLU languages other than German and English is not fully supported.

To use Generative AI technology for creating Intent example sentences, do the following:

  1. Open the existing Flow.
  2. In the upper-right corner of the Flow Editor page, select NLU.
  3. On the Intent tab, click Create Intent.
  4. Specify a unique name for the Intent and add a relevant description. Both fields influence the generated result: to ensure that the generated content meets expectations, fill in both. Relying solely on the name without a description will not produce the intended results.
  5. Activate the Generate Example Sentences setting.
  6. Set the number of sentences to generate.
  7. Generate new sentences by clicking Create.
  8. Click Build Model.

If you want to add more examples automatically, use the Generate Sentences button. New sentences are marked in light blue, and the system generates the specified number of sentences. Save your changes and build the model again.

You can also use Generative AI in the Node configuration.

Run-Time Generative AI Features

In Cognigy.AI, the Run-Time Generative AI features enrich AI Agents with dynamic interactions using LLMs. These features include running prompts, orchestrating conversations, rephrasing outputs, dynamic question reprompts, and generative knowledge retrieval.

You can configure the following Run-Time Generative AI features:

LLM Prompt Node

The LLM Prompt Node enables you to run a prompt against an LLM and either output the message or store it in the Input or Context objects.

GPT Conversation Node

Warning

This Node is part of Cognigy's large-language-model research efforts and is provided solely as a preview feature; it is not intended for production use. The GPT Conversation Node is deprecated and can no longer be created in Cognigy.AI v4.85 and later. Use the LLM Prompt Node to generate messages with LLM services.

The GPT Conversation Node enables an LLM to orchestrate a complete conversation, including determining the next best action and outputting relevant messages to the customer.

Rephrasing AI Agent Outputs

To use AI-enhanced bot output rephrasing in Say, Question, and Optional Question Nodes, do the following:

  1. Open the existing Flow.
  2. Add one of the Nodes: Say, Question, or Optional Question.
  3. Go to the AI-enhanced output section.
  4. In the Rephrase Output setting, select one of the options:
    • None — the Generative AI will not be applied to this Node. This setting is activated by default.
    • Based on Custom Input — specify custom values for the Input. The Custom Inputs field lets the bot developer provide information used to contextualize and rephrase the output.
    • Based on previous user inputs — specify how many of the last user inputs to take into account.
  5. Set the Temperature. The temperature determines how much variation the Generative AI introduces into its response.
  6. Click Save Node.

Check in the Interaction Panel if your Flow works as expected.

LLM-powered Question Reprompts

The Question Node includes a feature to output a prompt to the user when they have answered a question incorrectly. Instead of using static text, you can use LLMs to generate a more dynamic and personalized output.

Search Extract Output Node

The Search Extract Output Node uses Cognigy Knowledge AI to execute a search within a Knowledge Store, extract a relevant answer via a Generative AI model, and create an output.

LLM Entity Extract Node

The LLM Entity Extract Node retrieves specific entities from user inputs, such as product or booking codes and customer IDs. For example, if a user input is I would like to book flight AB123 from New York to London, the LLM can extract the booking code AB123 from this input.
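Conceptually, the Node sends the entity definition together with the user input to the LLM. The sketch below is purely illustrative of such an extraction prompt; it is not Cognigy's actual implementation, and every name and wording in it is an assumption:

```python
def build_extraction_prompt(entity: str, description: str,
                            user_input: str) -> str:
    """Illustrative only: a generic extraction-style prompt combining
    an entity name, its description, and the raw user input."""
    return (
        f"Extract the {entity} ({description}) from the following "
        f"user message. Reply with the value only.\n\n"
        f"User message: {user_input}"
    )

prompt = build_extraction_prompt(
    "booking code", "a two-letter, three-digit flight code",
    "I would like to book flight AB123 from New York to London",
)
```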

FAQ

Q1: Can I use my OpenAI free account for the Generative AI feature in Cognigy.AI?

A1: A paid account is required to get an API Key, which is necessary for using Generative AI. A free account does not provide this key.

Q2: Why doesn't Generative AI work with AudioCodes Nodes?

A2: Generative AI output supports only text messages in the AI channel.
