Updated in 2026.9

The table below provides an overview of the LLM providers supported by Cognigy.AI, detailing the standard models and their compatibility with Cognigy.AI features. You can also add a custom model that isn't listed in the table: when creating a model, select the Custom Model parameter and specify both the model type and the model name. For more information about adding a custom or standard model for a selected LLM provider, refer to the All LLM Providers page.
| Models | Design-Time Features¹ | AI Agent Node | AI Enhanced Outputs | LLM Prompt Node | Answer Extraction | Knowledge Search | Sentiment Analysis | NLU Embedding Model |
|---|---|---|---|---|---|---|---|---|
| Platform-Provided LLM | | | | | | | | |
| gpt-5-4², gpt-5-4-mini², gpt-5-4-nano², gpt-5-2², gpt-5-1², gpt-5², gpt-5-nano², gpt-5-mini², gpt-5-chat-latest, gpt-4.1-nano, gpt-4.1-mini, gpt-4.1, gpt-4o-mini, gpt-4o | | | | | | | | |
| text-embedding-ada-002, text-embedding-3-small³, text-embedding-3-large³ | | | | | | | | |
| gpt-5-4², gpt-5-4-mini², gpt-5-4-nano², gpt-5-2², gpt-5-1², gpt-5², gpt-5-nano², gpt-5-mini², gpt-5-chat-latest, gpt-4.1-nano, gpt-4.1-mini, gpt-4.1, gpt-4o-mini, gpt-4o | | | | | | | | |
| text-embedding-ada-002, text-embedding-3-small³, text-embedding-3-large³ | | | | | | | | |
| OpenAI-compatible LLMs | | | | | | | | |
| claude-sonnet-4-6², claude-sonnet-4-5², claude-sonnet-4-0 | | | | | | | | |
| claude-haiku-4-5², claude-3-haiku | | | | | | | | |
| gemini-2.5-pro², gemini-2.5-flash², gemini-2.5-flash-lite², gemini-2.0-flash (deprecated), gemini-2.0-flash-lite (deprecated) | | | | | | | | |
| gemini-embedding-001 | | | | | | | | |
| gemini-3.1-pro-preview², gemini-3-flash-preview², gemini-3.1-flash-lite-preview² | | | | | | | | |
| luminous | | | | | | | | |
| luminous-embedding-128⁵ | | | | | | | | |
| anthropic.claude-3-5-sonnet-20240620-v1:0, amazon.nova-lite-v1:0, amazon.nova-pro-v1:0 | | | | | | | | |
| amazon.titan-embed-text-v2:0⁶ | | | | | | | | |
| amazon.nova-micro-v1:0 | | | | | | | | |
| Converse API-compatible models | Partially supported⁷ | | | | | | | |
| pixtral-12b-2409, mistral-large-latest⁴, mistral-medium-latest⁴, mistral-small-latest⁴, pixtral-large-latest⁴ | | | | | | | | |
More Information
¹ Design-time features include Intent Sentence Generation, Flow Generation, Adaptive Card Generation, and Lexicon Generation.
² Reasoning models are optimized for tasks that require complex problem-solving and logical reasoning, but they consume more tokens and may incur higher costs. Before using these models in production, test their token consumption in debug mode and use them with caution. To reduce costs, consider using a non-reasoning model such as gpt-5-chat-latest. For more information about reasoning models, refer to the Microsoft Azure OpenAI, OpenAI, and Google documentation.

Anthropic's claude-sonnet-4-6 model doesn't support assistant message prefilling. To keep your Flows compatible, ensure that the last transcript step before the LLM call is a user message. For example, add an Add Transcript Step Node before the LLM call and set the role to user.
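The workaround above can be sketched as a small pre-flight check. The sketch below assumes a transcript is a list of `{role, content}` dictionaries; this shape is illustrative and not Cognigy.AI's internal transcript format:

```python
# Hypothetical sketch: ensure a transcript ends with a user message before
# calling a model (such as claude-sonnet-4-6) that rejects assistant
# message prefilling. The {role, content} shape is illustrative only.

def ensure_last_step_is_user(transcript, placeholder="Continue."):
    """Return a copy of the transcript whose final step has role 'user'.

    Mirrors the documented workaround: adding an Add Transcript Step Node
    with the role set to 'user' before the LLM call.
    """
    steps = list(transcript)
    if not steps or steps[-1]["role"] != "user":
        steps.append({"role": "user", "content": placeholder})
    return steps

ok = ensure_last_step_is_user([{"role": "user", "content": "Hi"}])
fixed = ensure_last_step_is_user([
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
])
print(ok[-1]["role"], fixed[-1]["role"])  # user user
```

In Cognigy.AI itself, the equivalent of the appended placeholder step is the Add Transcript Step Node mentioned above; no custom code is required in the product.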
³ For Knowledge AI, we recommend using text-embedding-ada-002. However, if you want to use text-embedding-3-small or text-embedding-3-large, make sure that you familiarize yourself with the restrictions of these models in Which Model to Choose?.
⁴ The *-latest suffix indicates that the model you select in Cognigy.AI always points to the latest version of that model. For more information, refer to Anthropic's or Mistral AI's model documentation.
⁵ This feature is currently in Beta, is hidden behind the FEATURE_ENABLE_ALEPH_ALPHA_EMBEDDING_LLM_WHITELIST feature flag, and may contain issues. Use only one type of embedding LLM per Project. Once you have chosen an embedding model for a Project, you cannot switch to a different one; if you want to use luminous-embedding-128, you must create a new Project. Switching embedding models within a Project will result in errors while this feature is in Beta.
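One reason a Project cannot mix embedding models is that each model produces vectors in its own space, often with different dimensions, so vectors indexed with one model cannot be meaningfully compared against queries embedded with another. A minimal sketch of this incompatibility, using plain-Python cosine similarity and the models' published dimensions (128 for luminous-embedding-128, 1536 for text-embedding-ada-002) purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity; only defined for vectors from the same embedding model."""
    if len(a) != len(b):
        raise ValueError(
            f"Embedding dimensions differ ({len(a)} vs {len(b)}): "
            "vectors from different embedding models are not comparable."
        )
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Vectors produced by the same model compare fine ...
print(round(cosine_similarity([1.0, 2.0], [2.0, 4.0]), 6))  # 1.0

# ... but mixing models, e.g. a 128-dimensional luminous-embedding-128 vector
# with a 1536-dimensional text-embedding-ada-002 vector, fails outright.
try:
    cosine_similarity([0.1] * 128, [0.1] * 1536)
except ValueError as e:
    print("error:", e)
```

Even when two models happen to share a dimension count, their vector spaces are unrelated, so similarity scores across models are meaningless rather than merely erroneous.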
⁶ In Cognigy.AI 2025.10 and earlier versions, the option to select this model is hidden behind the FEATURE_ENABLE_AWS_BEDROCK_EMBEDDING_LLM_WHITELIST feature flag.
⁷ Some models available through the Converse API might not support the AI Agent Node feature.