
Setting Up a MySQL Cluster on Linux with Pacemaker and Data Replication with DRBD

12 December 2020


In previous posts we have already covered, separately, how to set up a Linux cluster with Pacemaker, how to configure data replication with DRBD, and how to install MySQL. However, I wanted to write a guide that brings all of that together.


Description of the environment

Service IPs

This active-passive cluster will consist of two Linux CentOS 8 nodes (a RedHat 8 clone) with the following IPs:

  • 10.0.3.27 pcm1
  • 10.0.3.65 pcm2

The /etc/hosts file on both nodes looks like this:

[root@pcm1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 pcm1
10.0.3.27 pcm1
10.0.3.65 pcm2
[root@pcm1 ~]#
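Both node names must resolve locally, since Corosync and DRBD will be configured against them. A minimal sanity check along these lines can be scripted; the sample file below mirrors the configuration above, and the `check_node` helper is a hypothetical name, not part of the article:

```shell
# Build a sample hosts file mirroring the article's configuration.
# In production you would point the check at /etc/hosts itself.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 pcm1
10.0.3.27 pcm1
10.0.3.65 pcm2
EOF

# check_node: succeed if the given name appears as a hostname on an IPv4 line
check_node() {
  grep -qE "^[0-9]+(\.[0-9]+){3}([[:space:]]+[^[:space:]]+)*[[:space:]]+$1([[:space:]]|\$)" "$2"
}

for node in pcm1 pcm2; do
  if check_node "$node" /tmp/hosts.sample; then
    echo "$node: resolvable"
  else
    echo "$node: MISSING from hosts file"
  fi
done
```

The same function can be run against the real /etc/hosts on each node before starting the cluster setup.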

Storage

Each node will use its own independent 5 GB SAN disk (/dev/xvdf) to store the MySQL database:

[root@pcm1 ~]# fdisk -l |grep dev |grep -v mapper
Disk /dev/xvda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/xvda1 2048 4095 2048 1M 83 Linux
/dev/xvda2 * 4096 20971486 20967391 10G 83 Linux
Disk /dev/xvdf: 5 GiB, 5368709120 bytes, 10485760 sectors
[root@pcm1 ~]#

[root@pcm2 ~]# fdisk -l |grep dev |grep -v mapper
Disk /dev/xvda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/xvda1 2048 4095 2048 1M 83 Linux
/dev/xvda2 * 4096 20971486 20967391 10G 83 Linux
Disk /dev/xvdf: 5 GiB, 5368709120 bytes, 10485760 sectors
[root@pcm2 ~]#

Installing the EPEL repository

The EPEL repository is one of the prerequisites for installing all the software we will need for the Pacemaker cluster.

To install it on Linux CentOS 8, run dnf install -y epel-release on both cluster nodes. Once it is installed, the repository list should look like this:

[root@pcm1 ~]# dnf repolist
repo id        repo name
AppStream      CentOS-8 - AppStream
BaseOS         CentOS-8 - Base
epel           Extra Packages for Enterprise Linux 8 - x86_64
epel-modular   Extra Packages for Enterprise Linux Modular 8 - x86_64
extras         CentOS-8 - Extras
[root@pcm1 ~]#

Configuring Data Replication with DRBD

Installing the DRBD software repository

DRBD requires an additional repository (ELRepo), which is referenced in the official CentOS documentation. On both nodes we install the repository and then the DRBD packages:

dnf -y install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
dnf install -y drbd kmod-drbd90

The packages that end up installed are:

[root@pcm1 ~]# rpm -qa |grep -i drbd
drbd-9.13.1-1.el8.x86_64
drbd-udev-9.13.1-1.el8.x86_64
kmod-drbd90-9.0.25-2.el8_3.elrepo.x86_64
drbd-utils-9.13.1-1.el8.x86_64
[root@pcm1 ~]#

Replicating a Logical Volume (LVM) with DRBD

To synchronize disks with DRBD and use the replicated devices with LVM, we follow these steps on every node of the replication cluster:

  • Disable the LVM cache service (lvm2-lvmetad).
  • Remove the /etc/lvm/cache/.cache file.
  • Create the LVM structure and filesystems (detailed below).
  • Archive the default LVM configuration: gzip /etc/lvm/lvm.conf
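The preparation bullets above can be expressed as a small runbook. The sketch below uses a dry-run wrapper (a hypothetical `run` helper, not from the article) so the sequence can be reviewed before it actually modifies a node; the service and file names are the ones the article mentions:

```shell
# Dry-run runbook for the LVM preparation steps. With DRYRUN=1 (the default)
# each command is only printed; set DRYRUN=0 to really execute it on a node.
DRYRUN="${DRYRUN:-1}"
run() {
  if [ "$DRYRUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run systemctl disable --now lvm2-lvmetad.service  # disable the LVM cache daemon
run rm -f /etc/lvm/cache/.cache                   # remove the stale cache file
run gzip /etc/lvm/lvm.conf                        # archive the default lvm.conf
```

This pattern keeps destructive steps reviewable; the same wrapper can be reused for the DRBD bring-up commands later on.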

Next, we configure DRBD to synchronize Logical Volumes instead of whole disks.

  • Create the LVM structure and filesystems on both nodes:
[root@pcm1 ~]# vgcreate vgmysql /dev/xvdf
Physical volume "/dev/xvdf" successfully created.
Volume group "vgmysql" successfully created
[root@pcm1 ~]# lvcreate -n lvmysql -l+100%FREE vgmysql
Logical volume "lvmysql" created.
[root@pcm1 ~]#
  • Configure DRBD to synchronize the Logical Volumes we are interested in:
[root@pcm1 drbd.d]# pwd
/etc/drbd.d
[root@pcm1 drbd.d]# cat drbdmysql.res
resource drbdmysql {
  on pcm1 {
    device /dev/drbd0;
    disk /dev/vgmysql/lvmysql;
    meta-disk internal;
    address 10.0.3.27:7789;
  }
  on pcm2 {
    device /dev/drbd0;
    disk /dev/vgmysql/lvmysql;
    meta-disk internal;
    address 10.0.3.65:7789;
  }
}
[root@pcm1 drbd.d]#
  • Start the DRBD service and initialize the DRBD metadata on both nodes:
[root@pcm1 ~]# systemctl start drbd
[root@pcm2 ~]# systemctl start drbd

[root@pcm1 ~]# drbdadm create-md drbdmysql
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success
[root@pcm1 ~]#

[root@pcm2 ~]# drbdadm create-md drbdmysql
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success
[root@pcm2 ~]#

  • Bring the resource up and force pcm1 to become Primary:
[root@pcm1 ~]# drbdadm up drbdmysql
[root@pcm1 ~]# drbdadm primary --force drbdmysql
[root@pcm1 ~]# drbdadm status drbdmysql
drbdmysql role:Primary
  disk:UpToDate
  pcm2 role:Secondary
    peer-disk:UpToDate
[root@pcm1 ~]#
  • Once the disks are in sync, we can create a filesystem on the DRBD device:
[root@pcm1 ~]# mkfs.xfs /dev/drbd0
meta-data=/dev/drbd0             isize=512    agcount=4, agsize=327412 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=1309647, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@pcm1 ~]# mkdir /mysql
[root@pcm1 ~]# mount /dev/drbd0 /mysql
[root@pcm1 ~]# df -hP /mysql/
Filesystem Size Used Avail Use% Mounted on
/dev/drbd0 5.0G 68M 5.0G 2% /mysql
[root@pcm1 ~]#
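DRBD reports `UpToDate` on both sides only once the initial synchronization has finished, and the device should only be written from the Primary. A small helper (a hypothetical name, parsing the `drbdadm status` text shown above) lets scripts gate on the sync state instead of eyeballing the output:

```shell
# drbd_in_sync: succeed if a `drbdadm status` dump shows both the local
# disk and the peer disk as UpToDate. Pure text check; the real input
# would come from: drbdadm status drbdmysql
drbd_in_sync() {
  case "$1" in
    *disk:UpToDate*peer-disk:UpToDate*) return 0 ;;
    *) return 1 ;;
  esac
}

# sample copied from the status output above
sample='drbdmysql role:Primary
  disk:UpToDate
  pcm2 role:Secondary
    peer-disk:UpToDate'

if drbd_in_sync "$sample"; then echo "replica in sync"; else echo "still syncing"; fi
```

In a provisioning script one might poll with something like `until drbd_in_sync "$(drbdadm status drbdmysql)"; do sleep 5; done` before running mkfs.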

Configuring the MySQL database

Installing the software

The first thing we will do on both cluster nodes is install the MySQL software:

dnf install -y mysql mysql-server

[root@pcm1 ~]# rpm -qa |grep -i mysql
mysql-common-8.0.21-1.module_el8.2.0+493+63b41e36.x86_64
mysql-errmsg-8.0.21-1.module_el8.2.0+493+63b41e36.x86_64
mysql-8.0.21-1.module_el8.2.0+493+63b41e36.x86_64
mysql-server-8.0.21-1.module_el8.2.0+493+63b41e36.x86_64
[root@pcm1 ~]#

We change the directory where the database files will be stored (on both cluster nodes):

[root@pcm1 ~]# cat /etc/my.cnf.d/mysql-server.cnf |grep -v "#" |grep -v ^$
[mysqld]
datadir=/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
[root@pcm1 ~]# chown -R mysql:mysql /mysql/

Once the software is installed, we start the service:

[root@pcm1 ~]# systemctl start mysqld
[root@pcm1 ~]# systemctl status mysqld
● mysqld.service - MySQL 8.0 database server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-12-08 11:14:11 UTC; 6s ago
  Process: 4225 ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, status=0/SUCCESS)
  Process: 4099 ExecStartPre=/usr/libexec/mysql-prepare-db-dir mysqld.service (code=exited, status=0/SUCCESS)
  Process: 4074 ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, status=0/SUCCESS)
 Main PID: 4181 (mysqld)
   Status: "Server is operational"
    Tasks: 39 (limit: 23576)
   Memory: 427.6M
   CGroup: /system.slice/mysqld.service
           └─4181 /usr/libexec/mysqld --basedir=/usr

Dec 08 11:14:05 pcm1 systemd[1]: Starting MySQL 8.0 database server...
Dec 08 11:14:05 pcm1 mysql-prepare-db-dir[4099]: Initializing MySQL database
Dec 08 11:14:11 pcm1 systemd[1]: Started MySQL 8.0 database server.
[root@pcm1 ~]#

Creating a user in the MySQL database

This is simply to verify that the database works correctly:

[root@pcm1 ~]# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create user 'mysqltestuser' identified by 'Martinez8';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO mysqltestuser;
Query OK, 0 rows affected (0.00 sec)

mysql> select user from mysql.user;
+------------------+
| user             |
+------------------+
| mysqltestuser    |
| mysql.infoschema |
| mysql.session    |
| mysql.sys        |
| root             |
+------------------+
5 rows in set (0.00 sec)

mysql>

Installing Pacemaker and configuring the cluster resources

Now that DRBD is installed and we have verified that database replication works correctly, the goal is to build a Pacemaker cluster that activates the replica on the node where we want the service to run and mounts the filesystem automatically.

The first thing to do is enable the CentOS "High Availability" repository so we can install Pacemaker.

[root@pcm1 ~]# dnf repolist all |grep -i ha
HighAvailability  CentOS-8 - HA  disabled
[root@pcm1 ~]#
[root@pcm1 ~]# dnf config-manager --set-enabled HighAvailability
[root@pcm1 ~]#

Now we can install Pacemaker on both cluster nodes:

dnf install -y pcs pacemaker fence-agents-common drbd-pacemaker

Configuring the cluster

The installer creates the “hacluster” user, and we need to set a password for it:

[root@pcm1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: The password fails the dictionary check - it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.
[root@pcm1 ~]#

We do the same on the secondary node.

We start the Pacemaker pcsd service on both nodes:

[root@pcm1 ~]# systemctl start pacemaker
[root@pcm1 ~]# systemctl enable pacemaker
[root@pcm1 ~]# systemctl start pcsd
[root@pcm1 ~]# systemctl enable pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@pcm1 ~]# systemctl status pcsd
● pcsd.service - PCS GUI and remote configuration interface
   Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-12-08 12:29:26 UTC; 1min 2s ago
     Docs: man:pcsd(8)
           man:pcs(8)
 Main PID: 2816 (pcsd)
    Tasks: 1 (limit: 23576)
   Memory: 38.5M
   CGroup: /system.slice/pcsd.service
           └─2816 /usr/libexec/platform-python -Es /usr/sbin/pcsd

Dec 08 12:29:25 pcm1 systemd[1]: Starting PCS GUI and remote configuration interface...
Dec 08 12:29:26 pcm1 pcsd[2816]: INFO:pcs.daemon:Starting server...
Dec 08 12:29:26 pcm1 pcsd[2816]: INFO:pcs.daemon:Binding socket for address '*' and port '2224'
Dec 08 12:29:26 pcm1 pcsd[2816]: INFO:pcs.daemon:Server is listening
Dec 08 12:29:26 pcm1 pcsd[2816]: INFO:pcs.daemon:Notifying systemd we are running (socket '/run/systemd/notify')
Dec 08 12:29:26 pcm1 systemd[1]: Started PCS GUI and remote configuration interface.
Dec 08 12:29:26 pcm1 pcsd[2816]: INFO:pcs.daemon:Config files sync started
Dec 08 12:29:26 pcm1 pcsd[2816]: INFO:pcs.daemon:Config files sync skipped, this host does not seem to be in a cluster of at least 2 nodes
[root@pcm1 ~]#

We authenticate the nodes that will form part of the cluster:

[root@pcm1 ~]# pcs host auth pcm1 pcm2 -u hacluster -p ContraseñaSecreta
pcm2: Authorized
pcm1: Authorized
[root@pcm1 ~]#

We create the cluster for MySQL:

[root@pcm1 ~]# pcs cluster setup --start MySQLCL pcm1 pcm2
No addresses specified for host 'pcm1', using 'pcm1'
No addresses specified for host 'pcm2', using 'pcm2'
Destroying cluster on hosts: 'pcm1', 'pcm2'...
pcm2: Successfully destroyed cluster
pcm1: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'pcm1', 'pcm2'
pcm1: successful removal of the file 'pcsd settings'
pcm2: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'pcm1', 'pcm2'
pcm1: successful distribution of the file 'corosync authkey'
pcm1: successful distribution of the file 'pacemaker authkey'
pcm2: successful distribution of the file 'corosync authkey'
pcm2: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'pcm1', 'pcm2'
pcm1: successful distribution of the file 'corosync.conf'
pcm2: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
Starting cluster on hosts: 'pcm1', 'pcm2'...
[root@pcm1 ~]#

Configuring the Pacemaker cluster resources

Service IP

Normally, MySQL would listen on a service IP that can be brought up or down on whichever node the database is running. We would do it like this:

[root@pcm1 ~]# pcs property set stonith-enabled=false
[root@pcm1 ~]# pcs resource create MySQLIP IPAddr ip=10.0.3.50 cidr_netmask=24 op monitor interval=30s
Assumed agent name 'ocf:heartbeat:IPaddr' (deduced from 'IPAddr')
[root@pcm1 ~]# pcs status
Cluster name: MySQLCL
Cluster Summary:
  * Stack: corosync
  * Current DC: pcm2 (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
  * Last updated: Tue Dec  8 15:05:02 2020
  * Last change:  Tue Dec  8 15:04:47 2020 by root via cibadmin on pcm1
  * 2 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ pcm1 pcm2 ]

Full List of Resources:
  * MySQLIP (ocf::heartbeat:IPaddr): Started pcm1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@pcm1 ~]#
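Once MySQLIP is started, the service address should appear as a secondary IPv4 address on the active node. That check can be scripted; the `has_vip` helper and the sample `ip -4 addr` output below are illustrative, not from the article:

```shell
# has_vip: succeed if the given IPv4 address is plumbed in `ip -4 addr` output
has_vip() {
  printf '%s\n' "$2" | grep -qF "inet $1/"
}

# illustrative output; on the real node you would use: ip_out=$(ip -4 addr show)
ip_out='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001
    inet 10.0.3.27/24 brd 10.0.3.255 scope global dynamic eth0
    inet 10.0.3.50/24 brd 10.0.3.255 scope global secondary eth0'

if has_vip 10.0.3.50 "$ip_out"; then echo "service IP active"; else echo "service IP absent"; fi
```

Running the same check on the passive node should report the address as absent.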

Creating the DRBD filesystem resource in Pacemaker

  • We create the Pacemaker resource for the filesystem that sits on the DRBD device:
[root@pcm1 ~]# pcs resource create fs_drbd0 ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/mysql fstype=xfs
[root@pcm1 ~]# pcs status
Cluster name: MySQLCL
Cluster Summary:
  * Stack: corosync
  * Current DC: pcm2 (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
  * Last updated: Sat Dec 12 11:02:45 2020
  * Last change:  Sat Dec 12 10:59:19 2020 by root via cibadmin on pcm1
  * 2 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ pcm1 pcm2 ]

Full List of Resources:
  * fs_drbd0 (ocf::heartbeat:Filesystem): Started pcm1

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[root@pcm1 ~]#
  • We check that the resource really is started on the expected node:
[root@pcm1 ~]# df -hP /mysql/
Filesystem Size Used Avail Use% Mounted on
/dev/drbd0 5.0G 231M 4.8G 5% /mysql
[root@pcm1 ~]#
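At this point the service IP and the filesystem are independent resources, and nothing ties MySQL itself to them, so after a failover mysqld would still have to be started by hand on the new node. A sketch of the missing glue, assuming the database is managed as a `systemd:mysqld` resource (the name MySQLDB is hypothetical; fs_drbd0 and MySQLIP are the resources created above):

```shell
# Sketch only — these commands configure a live Pacemaker cluster and are
# not runnable standalone. MySQLDB is a hypothetical resource name.
pcs resource create MySQLDB systemd:mysqld op monitor interval=30s

# keep the database with its datadir, and the service IP with the database
pcs constraint colocation add MySQLDB with fs_drbd0 INFINITY
pcs constraint colocation add MySQLIP with MySQLDB INFINITY

# start order: filesystem -> database -> service IP
pcs constraint order fs_drbd0 then MySQLDB
pcs constraint order MySQLDB then MySQLIP
```

With constraints like these, moving any one resource drags the whole stack to the same node in the right order.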

Failover tests for the cluster resources and services

  • Now let's move the resource to the secondary node:
[root@pcm1 ~]# pcs resource move fs_drbd0 pcm2
[root@pcm1 ~]# pcs status
Cluster name: MySQLCL
Cluster Summary:
  * Stack: corosync
  * Current DC: pcm2 (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
  * Last updated: Sat Dec 12 11:03:30 2020
  * Last change:  Sat Dec 12 11:03:24 2020 by root via crm_resource on pcm1
  * 2 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ pcm1 pcm2 ]

Full List of Resources:
  * fs_drbd0 (ocf::heartbeat:Filesystem): Started pcm2

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[root@pcm1 ~]#
  • On node pcm1 the filesystem has been unmounted:
[root@pcm1 ~]# df -hP /mysql
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 8.0G 2.6G 5.5G 32% /
[root@pcm1 ~]#
  • Now let's check node pcm2:
[root@pcm2 ~]# df -hP /mysql
Filesystem Size Used Avail Use% Mounted on
/dev/drbd0 5.0G 231M 4.8G 5% /mysql
[root@pcm2 ~]# ls -la /mysql/
total 164308
drwxr-xr-x 6 mysql mysql 4096 Dec 12 10:24 .
dr-xr-xr-x. 19 root root 269 Dec 12 10:04 ..
-rw-r----- 1 mysql mysql 56 Dec 8 12:10 auto.cnf
-rw-r----- 1 mysql mysql 697 Dec 8 12:11 binlog.000001
-rw-r----- 1 mysql mysql 179 Dec 8 12:39 binlog.000002
-rw-r----- 1 mysql mysql 179 Dec 12 10:24 binlog.000003
-rw-r----- 1 mysql mysql 48 Dec 12 10:05 binlog.index
-rw------- 1 mysql mysql 1676 Dec 8 12:10 ca-key.pem
-rw-r--r-- 1 mysql mysql 1112 Dec 8 12:10 ca.pem
-rw-r--r-- 1 mysql mysql 1112 Dec 8 12:10 client-cert.pem
-rw------- 1 mysql mysql 1676 Dec 8 12:10 client-key.pem
-rw-r----- 1 mysql mysql 196608 Dec 12 10:07 '#ib_16384_0.dblwr'
-rw-r----- 1 mysql mysql 8585216 Dec 8 12:10 '#ib_16384_1.dblwr'
-rw-r----- 1 mysql mysql 3328 Dec 12 10:24 ib_buffer_pool
-rw-r----- 1 mysql mysql 12582912 Dec 12 10:24 ibdata1
-rw-r----- 1 mysql mysql 50331648 Dec 12 10:07 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 Dec 8 12:10 ib_logfile1
drwxr-x--- 2 mysql mysql 6 Dec 12 10:24 '#innodb_temp'
drwxr-x--- 2 mysql mysql 143 Dec 8 12:10 mysql
-rw-r----- 1 mysql mysql 4136 Dec 12 10:24 mysqld.log
-rw-r----- 1 mysql mysql 25165824 Dec 12 10:05 mysql.ibd
-rw-r--r-- 1 mysql mysql 7 Dec 8 12:10 mysql_upgrade_info
drwxr-x--- 2 mysql mysql 8192 Dec 8 12:10 performance_schema
-rw------- 1 mysql mysql 1680 Dec 8 12:10 private_key.pem
-rw-r--r-- 1 mysql mysql 452 Dec 8 12:10 public_key.pem
-rw-r--r-- 1 mysql mysql 1112 Dec 8 12:10 server-cert.pem
-rw------- 1 mysql mysql 1676 Dec 8 12:10 server-key.pem
drwxr-x--- 2 mysql mysql 28 Dec 8 12:10 sys
-rw-r----- 1 mysql mysql 10485760 Dec 12 10:07 undo_001
-rw-r----- 1 mysql mysql 10485760 Dec 12 10:07 undo_002
[root@pcm2 ~]#
  • Let's create a second test user in the MySQL database on node pcm2:
[root@pcm2 ~]# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create user 'mysqltestuser2' identified by 'Martinez8';
Query OK, 0 rows affected (0.11 sec)

mysql> exit
Bye
[root@pcm2 ~]#
  • Now we move the resource back to node pcm1:
[root@pcm1 ~]# pcs resource move fs_drbd0 pcm1
[root@pcm1 ~]# pcs status
Cluster name: MySQLCL
Cluster Summary:
  * Stack: corosync
  * Current DC: pcm2 (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
  * Last updated: Sat Dec 12 11:09:45 2020
  * Last change:  Sat Dec 12 11:09:41 2020 by root via crm_resource on pcm1
  * 2 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ pcm1 pcm2 ]

Full List of Resources:
  * fs_drbd0 (ocf::heartbeat:Filesystem): Started pcm1

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[root@pcm1 ~]#
  • We confirm that the database change has been replicated:
mysql> select user from mysql.user;
+------------------+
| user             |
+------------------+
| mysqltestuser    |
| mysqltestuser2   |
| mysql.infoschema |
| mysql.session    |
| mysql.sys        |
| root             |
+------------------+
6 rows in set (0.00 sec)

mysql>
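The failover checks above can be scripted by pulling the active node out of `pcs status`. A small parser sketch (the `resource_node` helper is a hypothetical name), fed with the resource-line format shown in the outputs above:

```shell
# resource_node: print the node a resource is Started on, given `pcs status`
# output on stdin ($1 = resource name). Prints nothing if it is not running.
resource_node() {
  sed -n "s/.*$1[[:space:]]*(.*):[[:space:]]*Started[[:space:]]*//p"
}

# sample line in the format printed by `pcs status` above
sample='  * fs_drbd0 (ocf::heartbeat:Filesystem): Started pcm2'
node=$(printf '%s\n' "$sample" | resource_node fs_drbd0)
echo "fs_drbd0 is running on: $node"
```

On a live cluster one could assert a move completed with, for example, `[ "$(pcs status | resource_node fs_drbd0)" = pcm1 ]`.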
