| Models | Design-Time Features¹ | AI Agent Node | AI Enhanced Outputs | LLM Prompt Node | Answer Extraction | Knowledge Search | Sentiment Analysis | NLU Embedding Model |
|---|---|---|---|---|---|---|---|---|
| Platform-Provided LLM | | | | | | | | |
| gpt-5⁷, gpt-5-nano⁷, gpt-5-mini⁷, gpt-5-chat-latest, gpt-4.1-nano, gpt-4.1-mini, gpt-4.1, gpt-4o-mini, gpt-4o<br>text-embedding-ada-002, text-embedding-3-small², text-embedding-3-large²<br>Deprecated: gpt-4, gpt-3.5-turbo-instruct<br>Deprecated: gpt-3.5-turbo (ChatGPT) | | | | | | | | |
| gpt-5⁷, gpt-5-nano⁷, gpt-5-mini⁷, gpt-5-chat-latest, gpt-4.1-nano, gpt-4.1-mini, gpt-4.1, gpt-4o-mini, gpt-4o<br>text-embedding-ada-002, text-embedding-3-small², text-embedding-3-large²<br>Deprecated: gpt-4, gpt-3.5-turbo-instruct<br>Deprecated: gpt-3.5-turbo (ChatGPT) | | | | | | | | |
| OpenAI-compatible LLMs | | | | | | | | |
| claude-opus-4-0, claude-sonnet-4-0<br>claude-3-haiku, claude-3-7-sonnet-latest³, claude-3-5-sonnet-latest³<br>Deprecated: claude-3-opus<br>Deprecated: claude-v1-100k, claude-instant-v1 | | | | | | | | |
| gemini-2.5-pro⁷, gemini-2.5-flash⁷, gemini-2.5-flash-lite⁷, gemini-2.0-flash, gemini-2.0-flash-lite | | | | | | | | |
| luminous<br>luminous-embedding-128⁴ | | | | | | | | |
| anthropic.claude-3-5-sonnet-20240620-v1:0, amazon.nova-lite-v1:0, amazon.nova-pro-v1:0<br>amazon.titan-embed-text-v2:0⁵<br>amazon.nova-micro-v1:0 | | | | | | | | |
| Converse API-compatible models | | Partially supported⁶ | | | | | | |
| pixtral-12b-2409, mistral-large-latest³, mistral-medium-latest³, mistral-small-latest³, pixtral-large-latest³ | | | | | | | | |
More Information
¹ Design-time features include Intent Sentence Generation, Flow Generation, Adaptive Card Generation, and Lexicon Generation.
² For Knowledge AI, we recommend using `text-embedding-ada-002`. However, if you want to use `text-embedding-3-small` or `text-embedding-3-large`, make sure that you familiarize yourself with the restrictions of these models in Which Model to Choose?.
³ The `*-latest` suffix indicates that the model you select in Cognigy.AI points to the latest version of that model. For more information, read Anthropic's or Mistral AI's model documentation.
⁴ This feature is currently in Beta, hidden behind the `FEATURE_ENABLE_ALEPH_ALPHA_EMBEDDING_LLM_WHITELIST` feature flag, and may contain issues. Only one type of embedding LLM should be used per Project. If you choose to use `luminous-embedding-128`, you must create a new Project. Once you have chosen an embedding model for a Project, you cannot switch to a different embedding model; to change models, use a different Project. Failing to do so will result in errors while this feature is in Beta.
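A practical reason embedding models cannot be mixed within a Project is that embeddings from different models live in different vector spaces and often have different dimensions, so text indexed with one model cannot be searched with another. The sketch below is an illustration only, outside of Cognigy.AI; it assumes the official OpenAI Python SDK and an `OPENAI_API_KEY` environment variable, and uses the OpenAI embedding models listed in the table above to show how the returned dimensionality varies per model.

```python
# Illustration only, outside Cognigy.AI: inspect the embedding size each OpenAI
# model returns, using the official OpenAI Python SDK (assumes OPENAI_API_KEY).
# Vectors from different models are not interchangeable, which is why a Project
# has to stay on a single embedding model.
from openai import OpenAI

client = OpenAI()

for model in (
    "text-embedding-ada-002",
    "text-embedding-3-small",
    "text-embedding-3-large",
):
    response = client.embeddings.create(model=model, input="knowledge search test")
    print(f"{model}: {len(response.data[0].embedding)} dimensions")
```

Even when two models happen to return vectors of the same length, their similarity scores are not comparable, so re-indexing the content in a new Project is the safe path.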
⁵ For Cognigy.AI 2025.10 and earlier versions, the option to select this model is hidden behind the `FEATURE_ENABLE_AWS_BEDROCK_EMBEDDING_LLM_WHITELIST` feature flag.
⁶ Note that some models available through the Converse API might not support the AI Agent Node feature.
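For context, the Converse API is Amazon Bedrock's unified conversation endpoint. The snippet below is a minimal sketch outside of Cognigy.AI, assuming boto3, AWS credentials, and Bedrock model access in the chosen region; the region and model ID are example choices, the model being one listed in the table above.

```python
# Minimal sketch, outside Cognigy.AI: calling a model through the Amazon Bedrock
# Converse API with boto3. Assumes AWS credentials and Bedrock model access are
# already configured; the region and model ID below are example choices.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # one of the models listed in the table
    messages=[{"role": "user", "content": [{"text": "Say hello in one sentence."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```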
⁷ Reasoning models consume more tokens and may incur higher costs. These models are optimized for tasks that require complex problem-solving and logical reasoning. Before using them in production, test token consumption in debug mode and use them with caution. To reduce costs, consider using a non-reasoning model such as `gpt-5-chat-latest`. For more information about reasoning models, refer to the Microsoft Azure OpenAI, OpenAI, and Google documentation.
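To get a feel for the cost difference before enabling a reasoning model, you can compare token usage for the same prompt outside of Cognigy.AI. The sketch below is an illustration only; it assumes the official OpenAI Python SDK and an `OPENAI_API_KEY`, and contrasts a reasoning model (gpt-5) with the non-reasoning gpt-5-chat-latest.

```python
# Illustration only, outside Cognigy.AI: compare token usage of a reasoning model
# (gpt-5) with a non-reasoning model (gpt-5-chat-latest) on the same prompt,
# using the official OpenAI Python SDK (assumes OPENAI_API_KEY is set).
from openai import OpenAI

client = OpenAI()
prompt = "A customer asks how to reset their password. Draft a short reply."

for model in ("gpt-5", "gpt-5-chat-latest"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    usage = response.usage
    # Reasoning models also spend hidden reasoning tokens, which are billed as
    # part of the completion tokens, so their totals are typically higher.
    print(f"{model}: prompt={usage.prompt_tokens}, "
          f"completion={usage.completion_tokens}, total={usage.total_tokens}")
```

If the reasoning model's totals are consistently much higher without a quality gain for your use case, the non-reasoning model is the more economical choice.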