To start using a Microsoft Azure OpenAI model with Cognigy.AI features, follow these steps:
  1. Add a Model
  2. Apply the Model

Add Models

You can add a model using one of the following interfaces:

Add Models via GUI

You can add a model provided by Azure OpenAI to Cognigy.AI in Build > LLM, either as a Standard Model or as a Custom Model. To add the model, you will need the following parameters:
Connection Type: Select one of the following authorization methods: API Key or OAuth2.
API Key: Enter the Azure API Key. You can find this value in the Keys & Endpoint section when examining your resource in the Azure portal. You can use either KEY1 or KEY2.
OAuth2: This method is hidden behind the FEATURE_ENABLE_OAUTH2_AZURE_CONNECTION_WHITELIST feature flag. You can use any OAuth2-compliant provider. OAuth 2.0 offers more control and security than API keys: it supports fine-grained permissions and expiring tokens, and it reduces exposure because short-lived tokens are used instead of the client secret in every request. To use this type of connection, fill in the following fields:
  • Client ID — add the Application (client) ID assigned to your app. This ID can be found in the Microsoft Entra admin center: go to App Registrations, select an app, then click Overview.
  • Client Secret — add the application secret created in the Certificates & secrets section in the Microsoft Entra admin center: go to App Registrations, select an app, then click Overview.
  • Oauth Url — add the URL to retrieve the access token. This URL points to the /token endpoint on the authorization server. The URL can have any format, as long as the endpoint returns an access token.
  • Scope — add a list of scopes for user permissions, for example, urn:grp:chatgpt.


Cognigy.AI uses the following steps to make requests with OAuth 2.0 to an Azure OpenAI model:
  1. Requesting a token. Cognigy.AI sends an HTTP POST request to Oauth Url with the following parameters: Client ID, Client Secret, and Scope.
  2. Receiving the token. The authorization server validates the request and returns an access token to Cognigy.AI.
  3. Using the token. When making requests to the Azure OpenAI model, Cognigy.AI includes the received token in the request instead of using an API key.
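
The following is a minimal sketch of this token flow, assuming a standard OAuth 2.0 client credentials grant. The token URL, scope, deployment endpoint, and API version shown here are placeholder examples, not values provided by Cognigy.AI or Azure.

```python
import requests

# Placeholder values for illustration; replace them with the values from
# your own Microsoft Entra app registration and Azure OpenAI resource.
OAUTH_URL = "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token"
CLIENT_ID = "<application-client-id>"
CLIENT_SECRET = "<client-secret>"
SCOPE = "urn:grp:chatgpt"

# Step 1: request a token. The client credentials and scope are sent as
# form parameters in an HTTP POST to the Oauth Url.
token_response = requests.post(
    OAUTH_URL,
    data={
        "grant_type": "client_credentials",  # assumed grant type
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": SCOPE,
    },
)
token_response.raise_for_status()

# Step 2: the authorization server validates the request and returns an
# access token.
access_token = token_response.json()["access_token"]

# Step 3: the token is included as a Bearer token instead of an API key
# when calling the Azure OpenAI endpoint (an example custom URL is shown).
completion = requests.post(
    "https://my-proxy.example.com/openai/deployments/my-deployment/"
    "chat/completions?api-version=2024-06-01",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(completion.json())
```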
Resource Name: Enter the resource name. To find this value, go to the Microsoft Azure home page. Under Azure services, click Azure OpenAI. In the left-side menu, under the Azure AI Services section, select Azure OpenAI. Copy the desired resource name from the Name column.
Deployment Name: Enter the deployment name. To find this value, go to the Microsoft Azure home page. Under Azure services, click Azure OpenAI. In the left-side menu, under Azure AI Services, select Azure OpenAI. Select a resource from the Name column. On the resource page, go to Resource Management > Model deployments. On the Model deployments page, click Manage Deployments. On the Deployments page, copy the desired deployment name from the Deployment name column.
API Version: Enter the API version to use for this operation, in the YYYY-MM-DD format. Note that the version may have an extended format, for example, YYYY-MM-DD-preview.
Custom URL: This parameter is optional. You can use it to route connections between your clusters and the Azure OpenAI provider through dedicated proxy servers for added security. Specify the URL based on the connection type you selected:
  • API Key — use the pattern https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>/<model-type>?api-version=<api-version>.
  • OAuth2 — the custom URL can be any valid endpoint URL. The URL must be complete, including any required query parameters.
When you add a custom URL, the Resource Name, Deployment Name, and API Version fields will be ignored.
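
As an illustration, the following sketch builds the API Key custom URL from placeholder resource, deployment, model type, and API version values, and sends a request with the key in the api-key header. All values are hypothetical examples.

```python
import requests

# Placeholder values for illustration; replace them with your own.
resource_name = "my-azure-openai-resource"
deployment_name = "my-gpt-deployment"
model_type = "chat/completions"  # the <model-type> path segment
api_version = "2024-06-01"
api_key = "<KEY1 or KEY2 from Keys & Endpoint>"

# Custom URL pattern for the API Key connection type:
# https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>/<model-type>?api-version=<api-version>
custom_url = (
    f"https://{resource_name}.openai.azure.com"
    f"/openai/deployments/{deployment_name}/{model_type}"
    f"?api-version={api_version}"
)

# With the API Key connection type, the key is sent in the api-key header.
response = requests.post(
    custom_url,
    headers={"api-key": api_key},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(response.json())
```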

Deprecation of Old Connections for Microsoft Azure OpenAI

In recent releases, we have updated the connection settings for Microsoft Azure OpenAI models. If you have old connections to Azure OpenAI (for example, created in the 4.53 release), they have the AzureOpenAIProvider type and are marked with a Deprecated label. Although these connections can still be active, we strongly recommend creating a model with the new AzureOpenAIProviderV2 type, as the old connection type will no longer be available in the future.

Note that for some Microsoft Azure OpenAI models, such as text-embedding-ada-002 for knowledge search features, you might encounter the following error when an LLM is triggered: Error while performing knowledge search. Remote returned error: Search failed: Could not fetch embeddings due to missing API resource name for Azure OpenAI. To resolve the issue, recreate the model and the connection so that both are updated to the latest format.

Apply the Model

To apply a model, follow these steps:
  1. In the left-side menu of the Project, go to Manage > Settings.
  2. Go to the section that matches your use case:
    • Generative AI Settings. In this section, activate Enable Generative AI Features. This setting is toggled on by default if you have previously set up Generative AI credentials.
    • Knowledge AI Settings. Use this section if you need to add a model for Knowledge AI. Select a model for the Knowledge Search and Answer Extraction features. Refer to the list of standard models and find the models that support these features.
  3. Navigate to the desired feature and select a model from the list. If there are no models available for the selected feature, the system will automatically select None. Save changes.

More Information
