MongoDB: Single to Multi Replica¶
Migrating from a single-replica to a multi-replica MongoDB setup with the MongoDB Helm Chart involves several steps, which are described in the following sections. In this guide, we assume that the old MongoDB installation is deployed in the `default` namespace and that the new MongoDB ReplicaSet will be installed into the `mongodb` namespace. We strongly recommend performing the migration inside the existing cluster, as this simplifies the data migration process.
Setting up Multi-Replica MongoDB Helm Chart¶
- Set up a 3-replica MongoDB Helm Release following the official guide.
- You will need to set the root password in the new setup to the same value as in the old one. You can find out the root password of the existing installation by executing the following command on the current Kubernetes cluster:

    ```sh
    kubectl get secret -n default cognigy-mongo-server -ojsonpath='{.data.mongo-initdb-root-password}' | base64 --decode
    ```

    Use this password as `auth.rootPassword` and `metrics.password` in the `values_prod.yaml` file for the new setup.
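For orientation, the relevant part of `values_prod.yaml` could look as follows. This is a minimal sketch: the key names `auth.rootPassword` and `metrics.password` come from the step above, while the placeholder value is illustrative and must be replaced with the decoded password.

```yaml
# Sketch of the relevant values_prod.yaml keys for the new multi-replica
# release. Replace <root-password> with the password decoded from the
# existing installation's secret.
auth:
  rootPassword: "<root-password>"
metrics:
  password: "<root-password>"
```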
Modifying MongoDB Connection String Secrets¶
To access MongoDB, Cognigy.AI services use Kubernetes secrets that contain a database connection string. These secrets must be adjusted for the new MongoDB setup. To automate this process, a script can be found in this repository. Ensure that all old secrets are stored in the `secrets` folder before executing the script:
```sh
git clone https://github.com/Cognigy/cognigy-mongodb-helm-chart.git
cd cognigy-mongodb-helm-chart/scripts
chmod +x secrets-migration.sh
./secrets-migration.sh
```

When prompted, enter the old and the new MongoDB hosts, for example:

```
Enter current MongoDB host: mongo-server:27017
Enter new MongoDB host(s): mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017,mongodb-1.mongodb-headless.mongodb.svc.cluster.local:27017,mongodb-2.mongodb-headless.mongodb.svc.cluster.local:27017
```

The script stores the original secrets in a folder called `original_secrets` and the adjusted ones in a folder called `new_secrets`.
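Conceptually, the script rewrites the host portion of each base64-decoded connection string. The following is only an illustrative sketch of that substitution; the variable names and the sample connection string are made up for the example and are not taken from the actual script:

```sh
#!/bin/sh
# Illustrative host substitution, as the migration script might perform it.
OLD_HOST="mongo-server:27017"
NEW_HOSTS="mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017,mongodb-1.mongodb-headless.mongodb.svc.cluster.local:27017,mongodb-2.mongodb-headless.mongodb.svc.cluster.local:27017"

# A connection string as it might appear after base64-decoding a secret
# (user, password, and database name are invented for this example):
CONN="mongodb://serviceuser:secret@${OLD_HOST}/service-ai"

# Swap the single host for the comma-separated replica set hosts:
NEW_CONN=$(printf '%s' "$CONN" | sed "s|${OLD_HOST}|${NEW_HOSTS}|")
echo "$NEW_CONN"
```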
MongoDB Data Migration¶
1.  This step requires downtime of your Cognigy.AI installation. Before starting the MongoDB data migration, scale down the Cognigy.AI deployments:

    ```sh
    for i in $(kubectl get deployment --namespace default --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep service-); do
      kubectl --namespace default scale --replicas=0 deployment $i
    done
    ```
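The loop above simply iterates over every deployment whose name starts with `service-` and scales it to zero. With sample deployment names (illustrative, not from a real cluster) the filtering behaves like this:

```sh
#!/bin/sh
# Illustrative stand-in for the kubectl deployment listing used above:
deployments="service-ai
service-nlp-matcher
mongo-server
service-resources"

for i in $(printf '%s\n' "$deployments" | grep service-); do
  # The real loop runs: kubectl --namespace default scale --replicas=0 deployment $i
  echo "scaling down: $i"
done
```

Note that `mongo-server` is left untouched: only the Cognigy.AI service deployments are scaled down, while the old database keeps running for the dump.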
2.  Find out the primary node of the new MongoDB cluster by executing `rs.status()`:

    ```sh
    kubectl exec -it -n mongodb mongodb-0 -- bash
    mongo -u root -p $MONGODB_ROOT_PASSWORD --authenticationDatabase admin
    rs.status()
    ```
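In the `rs.status()` output, the primary is the member whose `stateStr` is `PRIMARY`. As a small illustration of how to spot it, the following filters a hypothetical, heavily trimmed excerpt of that output (the member list and field layout are assumptions for the example):

```sh
#!/bin/sh
# Hypothetical, trimmed excerpt of the rs.status() members output:
status='
  "name" : "mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017",
  "stateStr" : "SECONDARY",
  "name" : "mongodb-1.mongodb-headless.mongodb.svc.cluster.local:27017",
  "stateStr" : "PRIMARY",
  "name" : "mongodb-2.mongodb-headless.mongodb.svc.cluster.local:27017",
  "stateStr" : "SECONDARY",
'

# Print the member name directly preceding the PRIMARY state line:
primary=$(printf '%s\n' "$status" | grep -B 1 '"PRIMARY"' | grep '"name"')
echo "$primary"
```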
3.  If you are setting up the multi-replica MongoDB setup:

    - on a different Kubernetes cluster - skip to step 5.
    - on the same Kubernetes cluster where the single-replica MongoDB is running - connect to the primary MongoDB pod. For example, if `mongodb-0` is the primary node:

      ```sh
      kubectl exec -it mongodb-0 -n mongodb -- bash
      ```
4.  After attaching to the primary pod of the multi-replica MongoDB setup, execute the following command to take a dump of the existing database and restore it into the multi-replica MongoDB:

    ```sh
    mongodump --archive --authenticationDatabase admin -u admin -p <password> --host "mongo-server.default.svc:27017" | mongorestore --host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017" --authenticationDatabase admin -u root -p <password> --archive --drop
    ```
5.  If you are setting up the multi-replica MongoDB setup on a different Kubernetes cluster, you have to dump the existing database to your local client filesystem and import it into the multi-replica setup afterward. The duration of this operation heavily depends on the size of your database and your internet connection speed. To speed up the process, you can execute the commands from a server running in the same data center as your Kubernetes clusters. If you follow this scenario, we strongly recommend testing the dump process in advance to estimate the downtime duration.

    - To make a dump to the local file system, log in to the old single-replica MongoDB pod:

      ```sh
      kubectl exec -it deployment/mongo-server -- bash
      mkdir -p ./tmp/backup
      mongodump --authenticationDatabase admin -u admin -p <password> --host "mongo-server.default.svc:27017" --out ./tmp/backup
      exit
      kubectl cp -n default <mongodb-pod-id>:/tmp/backup <path-to-the-local-directory>
      ```

    - Import the data into the multi-replica MongoDB cluster:

      ```sh
      kubectl cp -n mongodb <path-to-the-local-directory> mongodb-0:/tmp/
      kubectl exec -it mongodb-0 -n mongodb -- bash
      mongorestore --host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017" --authenticationDatabase admin -u root -p <password> ./tmp/<backup-folder>
      ```

      Here `mongodb-0` is considered the primary node. Change it if you have a different primary node, for example, `mongodb-1` or `mongodb-2`.
6.  Replace the existing secrets with the new secrets:

    ```sh
    kubectl replace -f new_secrets
    ```

    In case of a rollback, the old secrets can be restored by executing the following:

    ```sh
    kubectl delete -f new_secrets
    kubectl apply -f original_secrets
    ```
7.  Scale all the deployments back up and check that everything works as expected.
8.  Move the secrets from the `new_secrets` folder to the `secrets` folder and delete the `original_secrets` folder.