Prerequisites
- Kubernetes v1.21 - 1.23.
- The kubectl utility installed locally on a Linux or macOS client host. This guide does not support Windows client hosts.
- Helm v3.8+ installed on the client host.
- yq installed on the client host.
- The Kubernetes cluster meets the general Cognigy.AI prerequisites, including hardware resources.
- A backup of the Cognigy secrets of the Kustomize installation (MongoDB and Redis connection strings) exists in the form of Kubernetes manifests.
- A multi-replica MongoDB Helm Chart is used. The Cognigy.AI Helm Chart is incompatible with the single-replica MongoDB (mongo-server) installation. If you have not yet migrated from single-replica to multi-replica, follow the migration guide first.
- The Cognigy.AI Kustomize installation must be of the same version as the Cognigy.AI Helm Chart during the migration.
- The Cognigy.AI Kustomize installation must be >= v4.38.
- Snapshots/backups of all PVCs/PVs (MongoDB, redis-persistent, flow-modules, functions) are made before the migration starts.
Migration Checklist
There are two migration scenarios considered here:
- Migration inside the existing cluster. The Cognigy.AI Helm Chart in the `cognigy-ai` namespace and the MongoDB Helm Chart in the `mongodb` namespace are installed alongside the existing Kustomize installation. We strongly recommend this scenario, as it significantly simplifies the migration of the existing storage.
- Migration to a new cluster. The Cognigy.AI and MongoDB Helm Charts are installed in a new cluster. This scenario is more complex than the first one: you will either need to ensure that the underlying storage of the existing PVCs can be reattached to the new cluster, or restore the data from snapshots in the new cluster.
- Make sure backups (snapshots) of all PVCs are created with your cloud provider, including MongoDB, redis-persistent, flow-modules, and functions.
- Make sure a backup of the Cognigy secrets of the Kustomize installation is present.
- Prepare the `values_prod.yaml` values file for the Cognigy.AI Helm Chart as described here. Ensure that all adjustments (patches) made on your side to the current Kustomize installation are properly migrated to the `values_prod.yaml` file: ENV variables, resource requests/limits, replica counts, etc.
- Prepare the script from the Rename MongoDB Databases section and fill in the required password values in advance.
Preparation for Migration
This section describes how to prepare the migration of Cognigy.AI from Kustomize to Helm. These steps can be performed in advance and without bringing your Cognigy.AI installation down.

Secrets
During the migration, the Cognigy.AI product is moved from the `default` namespace to a different one. In this document, we use `cognigy-ai` as the target namespace. You can replace it with a namespace of your choice, but we strongly recommend using the `cognigy-ai` namespace. Hence, it is required to migrate the existing secrets to the new namespace and inform the Helm release about the migrated secrets. To do so, execute the following steps:
- The migration scripts can be found in this repository. Clone the repository and check out your current Cognigy.AI version.
- Place a backup of the existing secrets in the `secrets` folder.
- Copy the `secrets` folder into the `kustomize-to-helm-migration-scripts` folder.
- Make sure that all the existing secrets are stored in the `secrets` folder before running the script.
- Execute the script; it will generate new secrets for the Helm installation in the `migration-secrets` folder.
- Apply the secrets into the new `cognigy-ai` namespace.
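A minimal sketch of these steps, assuming placeholder values: the repository URL, version tag, and script name below are illustrative and must be replaced with the actual ones from the migration scripts repository.

```shell
# Clone the migration scripts repository (URL and tag are placeholders).
git clone https://github.com/Cognigy/kustomize-to-helm-migration-scripts.git
cd kustomize-to-helm-migration-scripts
git checkout <your-current-cognigy-ai-version>

# Place the backed-up secret manifests into the secrets/ folder, then run
# the generation script (the script name is an assumption; check the repo).
./generate-migration-secrets.sh

# Create the target namespace and apply the generated secrets.
kubectl create namespace cognigy-ai
kubectl apply -n cognigy-ai -f migration-secrets/
```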
Persistent Volumes
This subsection describes the migration of persistent volumes for AWS (EBS, and EFS with efs-provisioner) and Azure (Azure Disk and Azure Files). If your Cognigy.AI is deployed on a different cloud provider, you need to adapt the migration steps accordingly. This subsection considers the Migration inside the existing cluster scenario. For the Migration to a new cluster scenario, you need to restore the data from snapshots of persistent volumes made in the old cluster. We do not provide any commands for the second case, as this process heavily depends on your cloud provider setup. Refer to your infrastructure data backup and restore processes and your cloud provider's documentation.
1. Create snapshots of the existing Cognigy.AI PVCs: `flow-modules`, `functions`, and `redis-persistent`.
2. To avoid loss of PVs during the migration, set the `Reclaim Policy` to `Retain` for the underlying PVs of the 3 PVCs mentioned above and note down the corresponding PV names:
3. Get the PV IDs and note them down:
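Steps 2 and 3 can be sketched as follows, assuming the PVCs still live in the `default` namespace of the Kustomize installation:

```shell
# For each PVC, find the bound PV, note its name down, and set the
# reclaim policy of the underlying PV to Retain.
for pvc in flow-modules functions redis-persistent; do
  pv=$(kubectl get pvc "$pvc" -n default -o jsonpath='{.spec.volumeName}')
  echo "$pvc -> $pv"
  kubectl patch pv "$pv" \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
done
```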
4. (AWS only) Get the IDs of the underlying volumes (EFS file shares) of the `flow-modules` and `functions` PVs and note them down. You will need these IDs in the following steps:
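A sketch for reading the EFS volume IDs. Depending on whether the volumes were provisioned via the EFS CSI driver or the legacy efs-provisioner, the ID may appear under `spec.csi.volumeHandle` or inside the NFS server address, so inspect the full PV manifest if the jsonpath below comes back empty:

```shell
for pvc in flow-modules functions; do
  pv=$(kubectl get pvc "$pvc" -n default -o jsonpath='{.spec.volumeName}')
  # For CSI-provisioned volumes, the EFS ID is the volume handle.
  kubectl get pv "$pv" -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'
done
```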
5. (AWS only) Set the IDs of the `flow-modules` and `functions` volumes obtained in the previous step in your `values_prod.yaml` for the Cognigy.AI Helm Chart:
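The exact keys depend on the chart version; the fragment below is illustrative only (the key names are assumptions, so consult the chart's values reference for the real ones):

```yaml
# values_prod.yaml (illustrative; key names are assumptions)
efs:
  flowModules:
    id: "fs-0123456789abcdef0"   # EFS ID noted down for flow-modules
  functions:
    id: "fs-0fedcba9876543210"   # EFS ID noted down for functions
```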
6. (AWS only) For the Migration inside the existing cluster scenario, add annotations and labels to the existing `flow-modules` and `functions` storage classes and related role bindings:
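Helm refuses to manage pre-existing resources unless they carry its ownership metadata, so the adoption typically looks like the following (a sketch; the storage class names are taken from the PVC names above and may differ in your cluster):

```shell
for sc in flow-modules functions; do
  # Mark the storage class as owned by the cognigy-ai Helm release.
  kubectl annotate storageclass "$sc" \
    meta.helm.sh/release-name=cognigy-ai \
    meta.helm.sh/release-namespace=cognigy-ai --overwrite
  kubectl label storageclass "$sc" \
    app.kubernetes.io/managed-by=Helm --overwrite
done
```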
7. Save backups of the PVC manifests of the Kustomize and Helm installations:
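A sketch of saving the manifests locally (the folder name is an assumption):

```shell
mkdir -p pvc-backup
for pvc in flow-modules functions redis-persistent; do
  kubectl get pvc "$pvc" -n default -o yaml > "pvc-backup/${pvc}.yaml"
done
```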
8. Create another copy of the PVC manifests; this copy will be modified in the next step:
9. Remove unnecessary fields from the PVCs. Edit the PVC manifests saved in Step 8 for all 3 PVCs in the following way:
   - Change `metadata.namespace` to `cognigy-ai`.
   - Add `meta.helm.sh/release-name: cognigy-ai` and `meta.helm.sh/release-namespace: cognigy-ai` under `metadata.annotations`.
   - Add `app.kubernetes.io/managed-by: Helm` under `metadata.labels`.
   - Change `spec.volumeName` to the name of the respective PV from Step 2.
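After the four edits, a manifest could look roughly like this (names, storage class, and size are illustrative, and the PV name is a placeholder for the value you noted down in Step 2):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flow-modules
  namespace: cognigy-ai                        # edit 1: new namespace
  annotations:
    meta.helm.sh/release-name: cognigy-ai      # edit 2
    meta.helm.sh/release-namespace: cognigy-ai # edit 2
  labels:
    app.kubernetes.io/managed-by: Helm         # edit 3
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: flow-modules
  volumeName: <pv-name-from-step-2>            # edit 4
```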
Traefik
If you use the `Traefik` reverse proxy shipped with the Cognigy.AI installation by default, you need to execute the following commands. You do not need to execute them if you use a third-party reverse proxy:
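Conceptually these commands mirror the storage class adoption above: cluster-scoped Traefik resources created by the Kustomize installation must receive Helm ownership metadata before the Helm release can adopt them. The following is a hypothetical sketch only (the resource name is an assumption; use the exact commands from the official migration guide):

```shell
# Hypothetical: adopt an existing cluster-scoped Traefik resource into
# the Helm release by adding Helm's ownership annotations and label.
kubectl annotate crd ingressroutes.traefik.containo.us \
  meta.helm.sh/release-name=cognigy-ai \
  meta.helm.sh/release-namespace=cognigy-ai --overwrite
kubectl label crd ingressroutes.traefik.containo.us \
  app.kubernetes.io/managed-by=Helm --overwrite
```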
Migration
This section describes the actual migration of Cognigy.AI from Kustomize to Helm. The migration requires downtime of your Cognigy.AI installation; plan a maintenance window of at least 2 hours accordingly.

Rename MongoDB Databases
- Scale down the current installation:
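Scaling down can be sketched by setting all Deployment replicas in the Kustomize namespace (`default`) to zero:

```shell
# Note down the current replica counts first, in case you need to roll back.
kubectl get deployments -n default

# Scale every deployment in the default namespace down to zero replicas.
kubectl scale deployment --all --replicas=0 -n default
```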
- Rename the databases and create new users. In the Cognigy.AI Helm Chart, we have renamed the `service-analytics-collector-provider` database to `service-analytics-collector` and `service-analytics-conversation-collector-provider` to `service-analytics-conversation`. To rename the databases, execute the following script, filling in the password values in advance (see the comments inside the script). Check the root username of the MongoDB Helm installation (`root` or `admin`) and use it as `<root_username>` while migrating the databases.
MongoDB Migration Script Compatibility
The script below is compatible with the cognigy-mongodb-helm-chart only. If you are using any other MongoDB service (for example, MongoDB Atlas), you need to find compatible commands for your database service to rename the databases.
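MongoDB has no atomic "rename database" command, so scripts of this kind typically dump the old database and restore it under the new name, then create the service user on it. The following is a simplified sketch of the idea, not the official script: the host name is an assumption for the cognigy-mongodb-helm-chart, and the passwords must be filled in as described above.

```shell
MONGO_HOST="mongodb-0.mongodb-headless.mongodb.svc.cluster.local"  # assumption
ROOT_USER="<root_username>"   # root or admin, see above
ROOT_PASSWORD="..."           # fill in
SERVICE_PASSWORD="..."        # fill in

# Copy the database under its new name via dump/restore.
mongodump --host "$MONGO_HOST" -u "$ROOT_USER" -p "$ROOT_PASSWORD" \
  --authenticationDatabase admin \
  --db service-analytics-collector-provider --archive \
| mongorestore --host "$MONGO_HOST" -u "$ROOT_USER" -p "$ROOT_PASSWORD" \
  --authenticationDatabase admin \
  --nsFrom 'service-analytics-collector-provider.*' \
  --nsTo 'service-analytics-collector.*' --archive

# Create the service user on the renamed database.
mongosh "mongodb://$ROOT_USER:$ROOT_PASSWORD@$MONGO_HOST/admin" --eval '
  db.getSiblingDB("service-analytics-collector").createUser({
    user: "service-analytics-collector",
    pwd: "'"$SERVICE_PASSWORD"'",
    roles: [{ role: "readWrite", db: "service-analytics-collector" }]
  })'
```

Repeat the dump/restore pair for `service-analytics-conversation-collector-provider` to `service-analytics-conversation` accordingly.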
Migrate Persistent Volumes for Cognigy.AI
- Attach the PVCs of `flow-modules`, `functions`, and `redis-persistent` of the Cognigy.AI Helm release to the existing PVs of the Kustomize installation:
- Deploy the PVC manifests that have been modified in the Prepare Persistent Volumes section.
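Assuming the modified manifests are stored in a local folder (the folder name is an assumption), attaching boils down to applying them into the new namespace. Note that a retained PV in the `Released` state still references its old claim and cannot bind until that reference is cleared:

```shell
# If a retained PV is Released, clear its old claimRef first so it can
# bind to the new PVC.
kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'

# Apply the modified PVC manifests into the new namespace.
kubectl apply -n cognigy-ai -f pvc-modified/

# Verify that the PVCs bound to the expected pre-existing PVs.
kubectl get pvc -n cognigy-ai
```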
Migrate Cognigy.AI from Kustomize to Helm
Perform the following steps for the Cognigy.AI migration:
- Bring back the deployments of the Cognigy.AI Helm release:
- Verify that all deployments are in a ready state:
- (Traefik as reverse proxy only) In case the `EXTERNAL-IP` of the `traefik` service of type `LoadBalancer` changes, update the DNS records to point to the new `EXTERNAL-IP` of the `traefik` service. If you are using Traefik Ingress with an AWS Classic Load Balancer, change the CNAME of the DNS entries to the new `EXTERNAL-IP`. Check the new external IP/CNAME record with:
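The steps above can be sketched as follows; the release name and namespace follow the conventions used in this guide, while the chart reference is an assumption you should adjust to your setup:

```shell
# Install/upgrade the Cognigy.AI Helm release with your prepared values.
helm upgrade --install cognigy-ai cognigy/cognigy-ai \
  -n cognigy-ai -f values_prod.yaml

# Verify that all deployments become ready.
kubectl get deployments -n cognigy-ai

# Check the external IP / CNAME of the Traefik service.
kubectl get service traefik -n cognigy-ai
```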
Rollback
In case the Cognigy.AI Helm release does not function properly and a rollback is required, perform the following steps:
- Scale down the Cognigy.AI Helm release deployments.
- Delete the PVCs of the Helm release:
- Restore the PVCs of the Kustomize installation:
- Bring back the Kustomize installation:
- After the Cognigy.AI Kustomize installation is up and running, you can clean up the Helm release by completely removing the `cognigy-ai` namespace (the namespace of the Helm release):
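A sketch of the rollback sequence, assuming the PVC backups from the preparation section are stored in `pvc-backup/` and the original Kustomize replica counts are restored from your notes:

```shell
# Scale down the Helm release deployments.
kubectl scale deployment --all --replicas=0 -n cognigy-ai

# Delete the Helm release PVCs (the PVs survive thanks to Retain).
kubectl delete pvc flow-modules functions redis-persistent -n cognigy-ai

# Restore the Kustomize PVCs from the saved manifests.
kubectl apply -n default -f pvc-backup/

# Bring the Kustomize installation back up (use your noted replica counts).
kubectl scale deployment --all --replicas=1 -n default

# Once the Kustomize installation is healthy, remove the Helm namespace.
kubectl delete namespace cognigy-ai
```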
Clean-up
After the Cognigy.AI Helm release is up and running properly, you can clean up the Kustomize installation by executing the following steps:
- Drop the old databases in MongoDB (set `MONGODB_ROOT_USER` to `root` or `admin` in accordance with the `values_prod.yaml` of the MongoDB Helm Chart):
- Delete the Kustomize deployments running in the `default` namespace:
- Delete the services in the `default` namespace, except for the built-in `kubernetes` service:
- Delete the ingresses in the `default` namespace:
- Delete the PVCs from the `default` namespace (if still present):
- (Optional) Delete the PVC of the single-replica MongoDB setup in case of a single-replica to multi-replica MongoDB migration:
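A sketch of the clean-up commands; the `kubernetes` service in `default` must be left untouched, the database names are the pre-rename ones from the migration step, and the single-replica PVC name is an assumption:

```shell
# Drop the old databases (run against your MongoDB installation).
MONGODB_ROOT_USER="root"   # or admin, per values_prod.yaml
mongosh -u "$MONGODB_ROOT_USER" -p "$MONGODB_ROOT_PASSWORD" \
  --authenticationDatabase admin --eval '
  db.getSiblingDB("service-analytics-collector-provider").dropDatabase();
  db.getSiblingDB("service-analytics-conversation-collector-provider").dropDatabase();'

# Remove the Kustomize workloads and networking objects from default.
kubectl delete deployments --all -n default
kubectl get services -n default -o name \
  | grep -v '^service/kubernetes$' | xargs -r kubectl delete -n default
kubectl delete ingress --all -n default
kubectl delete pvc --all -n default

# (Optional) leftover single-replica MongoDB PVC; the name is an assumption.
kubectl delete pvc mongodb -n default
```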