The following table provides an overview of the LLM providers supported by Cognigy.AI, including their standard models and each model's compatibility with Cognigy.AI features. You can also add a custom model that is not listed in the table. To do so, select the Custom Model parameter when creating a model and specify both the model type and the model name. For more information about adding a custom or standard model for a selected LLM provider, refer to the All LLM Providers page.
| Models | Design-Time Features¹ | AI Agent Node | AI Enhanced Outputs | LLM Prompt Node | Answer Extraction | Knowledge Search | Sentiment Analysis | NLU Embedding Model |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Cognigy.AI** | | | | | | | | |
| Platform-Provided LLM | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| **Microsoft Azure OpenAI** | | | | | | | | |
| gpt-5⁷, gpt-5-nano⁷, gpt-5-mini⁷, gpt-5-chat-latest, gpt-4.1-nano, gpt-4.1-mini, gpt-4.1, gpt-4o-mini, gpt-4o | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ |
| text-embedding-ada-002, text-embedding-3-small², text-embedding-3-large² | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| Deprecated: gpt-4, gpt-3.5-turbo-instruct | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Deprecated: gpt-3.5-turbo (ChatGPT) | ✓ | ✗ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ |
| **OpenAI** | | | | | | | | |
| gpt-5⁷, gpt-5-nano⁷, gpt-5-mini⁷, gpt-5-chat-latest, gpt-4.1-nano, gpt-4.1-mini, gpt-4.1, gpt-4o-mini, gpt-4o | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ |
| text-embedding-ada-002, text-embedding-3-small², text-embedding-3-large² | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| Deprecated: gpt-4, gpt-3.5-turbo-instruct | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Deprecated: gpt-3.5-turbo (ChatGPT) | ✓ | ✗ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ |
| **OpenAI-Compatible** | | | | | | | | |
| OpenAI-compatible LLMs | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| **Anthropic** | | | | | | | | |
| claude-opus-4-0, claude-sonnet-4-0 | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ |
| claude-3-haiku, claude-3-7-sonnet-latest³, claude-3-5-sonnet-latest³, Deprecated: claude-3-opus | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Deprecated: claude-v1-100k, claude-instant-v1 | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| **Google Gemini** | | | | | | | | |
| gemini-2.5-pro⁷, gemini-2.5-flash⁷, gemini-2.5-flash-lite⁷, gemini-2.0-flash, gemini-2.0-flash-lite | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| **Aleph Alpha** | | | | | | | | |
| luminous | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| luminous-embedding-128⁴ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| **Amazon Bedrock** | | | | | | | | |
| anthropic.claude-3-5-sonnet-20240620-v1:0, amazon.nova-lite-v1:0, amazon.nova-pro-v1:0 | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| amazon.titan-embed-text-v2:0⁵ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| amazon.nova-micro-v1:0 | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Converse API-compatible models | ✗ | Partially supported⁶ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| **Mistral AI** | | | | | | | | |
| pixtral-12b-2409, mistral-large-latest³, mistral-medium-latest³, mistral-small-latest³, pixtral-large-latest³ | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
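The OpenAI-Compatible row above covers providers that expose an endpoint following the OpenAI Chat Completions request and response format. As a rough illustration of what such an endpoint looks like from a client's perspective, the minimal sketch below calls a hypothetical OpenAI-compatible endpoint with the official openai Python client; the base URL, API key, and model name are placeholders, and this does not describe how Cognigy.AI itself connects to the provider.

```python
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint and model name; replace with your provider's values.
client = OpenAI(
    base_url="https://llm.example.com/v1",
    api_key="YOUR_PROVIDER_API_KEY",
)

# A standard Chat Completions request; an OpenAI-compatible provider accepts this format.
response = client.chat.completions.create(
    model="my-custom-model",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the weather in one sentence."},
    ],
)

print(response.choices[0].message.content)
```

If a provider responds correctly to this kind of request, it falls into the OpenAI-compatible category shown in the table.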

More Information


1 Design-time features include Intent Sentence Generation, Flow Generation, Adaptive Card Generation, and Lexicon Generation.
2 For Knowledge AI, we recommend using text-embedding-ada-002. However, if you want to use text-embedding-3-small or text-embedding-3-large, familiarize yourself with the restrictions of these models in Which Model to Choose?.
3 The *-latest suffix indicates that the model you select in Cognigy.AI points to the latest version of that model. For more information, read Anthropic’s or Mistral AI’s models documentation.
4 This feature is currently in Beta, is hidden behind the FEATURE_ENABLE_ALEPH_ALPHA_EMBEDDING_LLM_WHITELIST feature flag, and may contain issues. Use only one type of embedding LLM per Project. If you want to use luminous-embedding-128, you must create a new Project: once you have chosen an embedding model for a Project, you cannot switch to a different one, and attempting to do so results in errors while this feature is in Beta.
5 For Cognigy.AI 2025.10 and earlier versions, the option to select this model is hidden behind the FEATURE_ENABLE_AWS_BEDROCK_EMBEDDING_LLM_WHITELIST feature flag.
6 Note that some models from the Converse API might not support the AI Agent Node feature.
7 Reasoning models consume more tokens and may incur higher costs. These models are optimized for tasks that require complex problem-solving and logical reasoning. Before using them in production, test their token consumption in debug mode and use them with caution. To reduce costs, consider using a non-reasoning model such as gpt-5-chat-latest. For more information about reasoning models, refer to the Microsoft Azure OpenAI, OpenAI, and Google documentation.
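Besides Cognigy.AI's debug mode, a rough way to compare token consumption between models is to call the provider's API directly and inspect the usage figures returned with each response. The sketch below uses the OpenAI Chat Completions API for this comparison; the model name is an example, and the completion_tokens_details.reasoning_tokens field is only present for reasoning models.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Example request to a reasoning model; swap in any model you want to compare.
response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "Plan a three-step troubleshooting guide for a failed login."}],
)

# Token usage reported by the API for this single request.
usage = response.usage
print("prompt tokens:    ", usage.prompt_tokens)
print("completion tokens:", usage.completion_tokens)
print("total tokens:     ", usage.total_tokens)

# Reasoning models additionally report how many completion tokens were spent on internal reasoning.
details = getattr(usage, "completion_tokens_details", None)
if details is not None and details.reasoning_tokens is not None:
    print("reasoning tokens: ", details.reasoning_tokens)
```

Running the same prompt against a reasoning and a non-reasoning model makes the cost difference visible before you commit to a model in production.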