OpenAI-Compatible LLMs
Warning
You are subject to the terms and conditions of the third-party LLM providers you are using.
To start using an OpenAI-compatible model with Cognigy.AI features, follow these steps:
Add Models
You can add a model using one of the following interfaces:
Add Models via GUI
You can add a model compatible with OpenAI's API standards to Cognigy.AI in Build > LLM. To add the model, you will need the following parameters:
| Parameter | Description |
|---|---|
| API Key | Add an API key from the respective provider. |
| Model | Select Custom Model. |
| Model Type | Select Chat for a model compatible with the https://api.openai.com/v1/chat/completions API, Completion for one compatible with the https://api.openai.com/v1/completions API, or Embedding for one compatible with the https://api.openai.com/v1/embeddings API. For more information, refer to the OpenAI Text Generation Models documentation. |
| Model Name | Enter the name of the model supported by the respective provider. To find model names, refer to the LLM provider's documentation. |
| Base URL | Enter the base URL of the LLM provider. For example, https://api-inference.huggingface.co/v1/ for Hugging Face. |
| Custom Authentication Header Name | Enter the name of a custom HTTP header for authentication, such as `X-Custom-Auth`. This parameter is helpful when integrating with OpenAI-compatible models or third-party APIs that don't accept the standard `Authorization` header. Instead of sending the API key as `Authorization: Bearer <your-api-key>`, the system sends it using your custom header name, for example, `X-Custom-Auth: <your-api-key>`. |
Apply the changes. To verify that the connection works, click Test.
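For illustration, the authentication behavior described above can be sketched as follows. This is not Cognigy.AI code: `build_auth_headers` is a hypothetical helper, and the model name and API key are placeholders.

```python
# Sketch of how an OpenAI-compatible chat request could be authenticated.
# With no custom header name configured, the key goes into the standard
# Authorization header; otherwise it is sent under the custom header name.

def build_auth_headers(api_key, custom_header_name=None):
    """Return HTTP headers for an OpenAI-compatible API call."""
    if custom_header_name:
        # e.g. {"X-Custom-Auth": "<your-api-key>"}
        return {custom_header_name: api_key}
    # Standard OpenAI-style bearer authentication.
    return {"Authorization": f"Bearer {api_key}"}

# Example payload for a /v1/chat/completions endpoint.
payload = {
    "model": "your-model-name",  # placeholder: use a model your provider supports
    "messages": [{"role": "user", "content": "Hello!"}],
}

print(build_auth_headers("<your-api-key>"))
print(build_auth_headers("<your-api-key>", "X-Custom-Auth"))
```

The same request body works against any provider that implements the OpenAI chat completions schema; only the base URL and headers change.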
Add Models via the API
You can add either a standard or custom model using the Cognigy.AI API POST /v2.0/largelanguagemodels request. Then, test the connection for the created model via the Cognigy.AI API POST /v2.0/largelanguagemodels/{largeLanguageModelId}/test.
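As a rough sketch, the two API calls above could be issued as follows. The instance URL and credential are placeholders, the request body field names are assumptions mirroring the GUI parameters rather than the documented schema, and the `X-API-Key` authentication header is likewise an assumption; check the Cognigy.AI API reference for the exact payload.

```python
import json
import urllib.request

BASE = "https://your-cognigy-instance/v2.0"  # placeholder instance URL
API_KEY = "<your-cognigy-api-key>"           # placeholder credential

# Assumed body fields mirroring the GUI parameters; the real schema may differ.
payload = {
    "name": "My Custom Model",
    "modelType": "chat",
    "modelName": "your-model-name",
    "baseUrl": "https://api-inference.huggingface.co/v1/",
}

def post(path, body):
    """Send a JSON POST request to the Cognigy.AI API (sketch only)."""
    req = urllib.request.Request(
        f"{BASE}{path}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# 1. Create the model, then 2. test the stored connection by its ID
# (uncomment against a real instance; the ID field name is an assumption):
# created = post("/largelanguagemodels", payload)
# post(f"/largelanguagemodels/{created['_id']}/test", {})
```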
Apply the Model
To apply a model, follow these steps:
- In the left-side menu of the Project, go to Manage > Settings.
- Go to the section that matches your use case:
    - Generative AI Settings. Activate Enable Generative AI Features. This setting is toggled on by default if you have previously set up the Generative AI credentials.
    - Knowledge AI Settings. Use this section if you need a model for Knowledge AI. Select a model for the Knowledge Search and Answer Extraction features. Refer to the list of standard models to find the models that support these features.
- Navigate to the desired feature and select a model from the list. If no model is available for the selected feature, the system automatically selects None. Save your changes.