Managing a self-hosted blog is both enjoyable and educational, offering a deep dive into a range of technical topics: infrastructure, orchestration, containerization, and database management. However, this journey comes with its own set of challenges. Every problem requires a hands-on resolution, since I am the one in charge of troubleshooting and fixing issues. This aspect, while daunting, contributes significantly to skill development.
The blog runs on the Ghost platform, which is backed by a MySQL database. To enhance reliability and ensure continuous operation, I have set up a 3-node MySQL InnoDB Cluster. This configuration bolsters the system's resilience against failures and keeps the blog highly available: the multi-node approach significantly reduces the risk of downtime, as any node can take over if another fails, maintaining uninterrupted service.
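For context, creating such a cluster by hand with mysqlsh looks roughly like the sketch below. In my setup the bootstrap is automated, so treat this as illustrative rather than my exact procedure; the hostnames match the instances that appear in the status output later.
// Sketch: bootstrapping a 3-node InnoDB Cluster manually with mysqlsh,
// while connected to the instance that will become the primary
var cluster = dba.createCluster('mysql_innodbcluster')
cluster.addInstance('mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306', {recoveryMethod: 'clone'})
cluster.addInstance('mysql-innodbcluster-2.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306', {recoveryMethod: 'clone'})
cluster.status()  // all three members should report ONLINE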
The storage needs of the MySQL database are fulfilled and managed by OpenEBS cStor, a container-attached storage engine. It provides dynamic volume provisioning, allocating storage resources flexibly as needed, and it improves data durability and accessibility through volume replication: data is duplicated across replicas on different nodes, so a copy remains available in case of any single point of failure.
When volume replicas malfunction and the volume loses quorum among its replicas, OpenEBS cStor automatically switches the affected volume to a read-only state. This precaution is crucial for maintaining data integrity, as it prevents further write operations that could lead to data corruption. In read-only mode the data remains accessible for reading and querying, but no modifications can be made until the issue is resolved.
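On the Kubernetes side, the health of a cStor volume and its replicas can be inspected through OpenEBS's custom resources. A sketch, assuming OpenEBS runs in the default openebs namespace:
# Check overall volume state and the health of each replica
kubectl get cstorvolumes -n openebs
kubectl get cstorvolumereplicas -n openebs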
Recently, I faced a problem with a stalled virtual machine that activated this fallback mechanism in OpenEBS cStor, switching several provisioned volumes to read-only mode. Consequently, some MySQL instances fell out of sync with the cluster, compromising its fault tolerance.
Understanding the Cluster Status
First, I needed to understand the status of the cluster: which nodes were functioning properly and which were out of sync. To accomplish this, I logged into a functioning pod hosting a MySQL instance and executed commands using mysqlsh.
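Getting to that shell looked roughly like this; the namespace and container name below are assumptions based on my setup:
# Open a shell inside a healthy cluster member, then start mysqlsh from there
kubectl exec -n mysql -it mysql-innodbcluster-0 -c mysql -- bash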
Here is the actual cluster status output I saw:
MySQL localhost:33060+ ssl JS > cluster = dba.getCluster()
<Cluster:mysql_innodbcluster>
MySQL localhost:33060+ ssl JS > cluster.status()
{
    "clusterName": "mysql_innodbcluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql-innodbcluster-0.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE_PARTIAL",
        "statusText": "Cluster is NOT tolerant to any failures. 2 members are not active.",
        "topology": {
            "mysql-innodbcluster-0.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306": {
                "address": "mysql-innodbcluster-0.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            },
            "mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306": {
                "address": "mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
                "memberRole": "SECONDARY",
                "mode": "n/a",
                "readReplicas": {},
                "role": "HA",
                "shellConnectError": "MySQL Error 2003: Could not open connection to 'mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306': Can't connect to MySQL server on 'mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306' (110)",
                "status": "(MISSING)"
            },
            "mysql-innodbcluster-2.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306": {
                "address": "mysql-innodbcluster-2.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
                "memberRole": "SECONDARY",
                "mode": "n/a",
                "readReplicas": {},
                "recovery": {
                    "cloneStartTime": "2023-08-18 09:09:02.645",
                    "cloneState": "Completed",
                    "currentStage": "RECOVERY",
                    "currentStageState": "Completed"
                },
                "recoveryStatusText": "Cloning in progress",
                "role": "HA",
                "status": "RECOVERING",
                "version": "8.0.32"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql-innodbcluster-0.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306"
}
This output indicates that the cluster is in the OK_NO_TOLERANCE_PARTIAL state, where it cannot tolerate any further failures. Two members of the cluster are inactive, which is a critical issue for the cluster's high availability.
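The same membership picture can be cross-checked with plain SQL from any reachable member, using the Group Replication tables in the Performance Schema:
-- List group members and their states as seen by this instance
SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE
FROM performance_schema.replication_group_members;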
Remedy
The first step in my troubleshooting was resolving the volume integrity issue. To accomplish this, I rebooted the stalled virtual machine, a crucial action in the recovery process, and referred to the detailed solutions and insights in the following post. It offers specific guidance on handling OpenEBS cStor volumes stuck in offline mode, and it was particularly helpful for understanding the nuances of the problem and implementing the steps needed to rectify it.
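After the reboot, I could verify from inside a pod that the data volume was mounted read-write again. A quick sanity check (pod, namespace, and container names are assumptions from my setup):
# The mount flags for the data directory should show rw rather than ro
kubectl exec -n mysql mysql-innodbcluster-1 -c mysql -- grep /var/lib/mysql /proc/mounts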
Once the volume issue was resolved, the next step was restoring the MySQL cluster: bringing the MySQL instances back into sync and ensuring the overall integrity and functionality of the cluster.
To reintegrate the nodes into the cluster, there were two options: rebuild each instance from a fresh backup restored from the primary, or remove the cluster metadata from the instance and then rejoin it to the cluster. I opted for the latter approach. Below are the steps I followed.
Access and Rescan the Cluster: Log in to the working node and enter mysqlsh. Use dba.getCluster() to obtain the cluster instance and cluster.rescan() to refresh the state of the cluster.
bash-4.4$ mysqlsh
Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
MySQL Shell 8.0.32
Copyright (c) 2016, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.
MySQL JS > \c root@localhost
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 24554083 (X protocol)
Server version: 8.0.32 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL localhost:33060+ ssl JS > cluster = dba.getCluster();
<Cluster:mysql_innodbcluster>
MySQL localhost:33060+ ssl JS > cluster.rescan()
Rescanning the cluster...
Result of the rescanning operation for the 'mysql_innodbcluster' cluster:
{
    "name": "mysql_innodbcluster",
    "newTopologyMode": null,
    "newlyDiscoveredInstances": [],
    "unavailableInstances": [
        {
            "host": "mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
            "label": "mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
            "member_id": "5c9566c5-b6db-11ed-8f62-e6cbb8217eb4"
        }
    ],
    "updatedInstances": []
}
The instance 'mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306' is no longer part of the cluster.
The instance is either offline or left the HA group. You can try to add it to the cluster again with the cluster.rejoinInstance('mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306') command or you can remove it from the cluster configuration.
Would you like to remove it from the cluster metadata? [Y/n]: Y
Removing instance from the cluster metadata...
The instance 'mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306' was successfully removed from the cluster metadata.
Dropping unused recovery account: 'mysql_innodb_cluster_1001'@'%'
Confirm Node Removal: If a node is not functioning, remove it from the cluster. Press 'Y' in the interactive MySQL Shell to remove the missing node from the cluster metadata, as in the transcript above.
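If the interactive prompt is not available, for example when scripting, the same cleanup can be done explicitly with removeInstance; the force option is needed because the member is unreachable:
// Explicit removal of an unreachable member (sketch)
cluster.removeInstance('mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306', {force: true})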
Prepare the Missing Node: Log in to the missing node and enter mysql. Turn read-only mode off so the cleanup statements below can run.
mysql> SET GLOBAL SUPER_READ_ONLY = OFF;
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL READ_ONLY = OFF;
Query OK, 0 rows affected (0.00 sec)
Stop Group Replication: Use STOP GROUP_REPLICATION to prevent conflicts during rejoining.
mysql> STOP GROUP_REPLICATION;
Query OK, 0 rows affected (0.00 sec)
Reset Slave Configuration: Clear replication settings with RESET SLAVE ALL.
mysql> RESET SLAVE ALL;
Query OK, 0 rows affected, 1 warning (0.00 sec)
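The warning in that output is presumably the deprecation notice for this syntax; from MySQL 8.0.22 onward, the equivalent non-deprecated statement is:
mysql> RESET REPLICA ALL;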
Drop Cluster Metadata: Remove the node's local copy of the InnoDB Cluster metadata schema.
mysql> DROP DATABASE mysql_innodb_cluster_metadata;
Query OK, 21 rows affected (1.27 sec)
Rejoin the Node: Go back to mysqlsh on the working node. Use cluster.addInstance(instance) and choose "Clone" for data synchronization.
Monitor Recovery Status: For nodes in the recovery process, like mysql-innodbcluster-2, monitor the progress and ensure the cloning process completes.
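Beyond polling cluster.status() from the working node, the clone operation can be followed directly on the joining node through the Performance Schema:
-- On the joining node: watch the clone advance stage by stage
SELECT STAGE, STATE FROM performance_schema.clone_progress;
SELECT STATE FROM performance_schema.clone_status;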
I carried out the same set of recovery steps for each node that had fallen out of sync, ensuring both were properly aligned and functioning within the cluster. Once this was completed for both nodes, the cluster's fault tolerance was restored. That restoration mattered not only for the cluster's immediate operational integrity but also for its long-term resilience, so that similar issues can be handled in the future without significant disruption to performance or data integrity.
MySQL localhost:33060+ ssl JS > dba.getCluster().status()
{
    "clusterName": "mysql_innodbcluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql-innodbcluster-0.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql-innodbcluster-0.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306": {
                "address": "mysql-innodbcluster-0.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            },
            "mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306": {
                "address": "mysql-innodbcluster-1.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            },
            "mysql-innodbcluster-2.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306": {
                "address": "mysql-innodbcluster-2.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql-innodbcluster-0.mysql-innodbcluster-instances.mysql.svc.cluster.local:3306"
}
Conclusion
This guide provides a detailed approach to recovering an InnoDB Cluster, especially when the cluster reports no tolerance for failures. By understanding the cluster's status and carefully executing each step, we can restore the cluster's health and ensure its high availability.
Remember, every cluster environment is unique. Tailor these steps to fit your specific situation, and always have backups and recovery plans in place before making significant changes to your cluster.
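For that last point, MySQL Shell's dump utility is one convenient way to take a full logical backup before attempting this kind of surgery (the output path below is just an example):
// Dump the whole instance to a local directory before making changes (sketch)
util.dumpInstance('/backups/pre-repair', {threads: 4})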