MySQL Architectures and Ansible Automation

This post walks through configuring MySQL master/slave replication, building an MHA high-availability cluster, deploying Percona XtraDB Cluster, and automating a binary MySQL 8.0 deployment with Ansible.

1. The master node has been running for a while and already holds a large amount of data. How do you configure and start a slave node? (Write out the steps.)

(1). Take a full backup on the master server with the following command:

# -A dumps all databases, -F rotates the binary log first, --master-data=1 records the CHANGE MASTER coordinates
mysqldump -uUSER -pPASSWORD -A -F --single-transaction --master-data=1 > fullback.sql

(2). Send the generated full backup file to the slave server:

scp fullback.sql root@SLAVEip:

(3). Configure the slave: first extract the binary log coordinates from the full backup:

grep '^CHANGE MASTER' fullback.sql
CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.00000#', MASTER_LOG_POS=#;

(4). In fullback.sql, complete the 'CHANGE MASTER TO' statement with the replication settings below, then load the file to restore the data and configure replication:

# Add the following to fullback.sql
vim fullback.sql
CHANGE MASTER TO
MASTER_HOST='masterhostip',  # master node IP
MASTER_USER='USER',          # replication account created on the master
MASTER_PASSWORD='password',
MASTER_PORT=3306,
MASTER_LOG_FILE='mariadb-bin.00000#', MASTER_LOG_POS=#;   # the coordinates extracted above
# Load the data
mysql -uUSER -pPASSWORD < fullback.sql

(5). Start replication on the slave and check that it synchronized successfully:

mysql> start slave;
mysql> show slave status\G
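
In the SHOW SLAVE STATUS output, Slave_IO_Running and Slave_SQL_Running should both read Yes, and Seconds_Behind_Master should be 0 or shrinking. A convenience one-liner to pull just those fields (an added check, not part of the original steps):

mysql -uUSER -pPASSWORD -e 'show slave status\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'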

2. The master server has gone down; promote one slave to become the new master. (Write out the steps.)

(1). Check the replication status of each slave and prefer the one whose data is newest, i.e. closest to the failed master, as the new master; see the comparison snippet below.
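
One way to compare the candidates is to run the following on every slave and pick the one with the highest read/executed coordinates (a convenience check, assuming all slaves replicate from the same binlog file; not part of the original steps):

mysql -e 'show slave status\G' | grep -E 'Master_Log_File|Read_Master_Log_Pos|Exec_Master_Log_Pos'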

(2). Once the server to promote is chosen, edit its database configuration: turn off read-only and enable binary logging.

# Edit /etc/my.cnf
server-id=#
read-only=OFF
log-bin
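
Note that read_only can also be flipped at runtime, but enabling the binary log only takes effect after a restart (a hedged aside, not in the original steps):

mysql -e 'set global read_only=off;'
systemctl restart mysqld    # or mariadb, depending on the distribution; required for log-bin to take effect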

(3). Take a full backup on the new master and send the backup file to the other slave servers:

mysqldump -uUSER -pPASSWORD -A -F --single-transaction --master-data=1 > fullback.sql
scp fullback.sql root@SLAVEip:

(4). Analyze the old master's binary log, export any events that never made it to the new master, and apply them on the new master to recover as much data as possible; a sketch follows.
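
A hedged sketch of that recovery, assuming the old master's binlog files are still readable and the last position the new master executed is known (file name and position below are placeholders):

#On the old master (or from a copy of its binlog files), export everything after the last replicated position
mysqlbinlog --start-position=# /var/lib/mysql/mariadb-bin.00000# > gap.sql
#Apply the missed events on the new master
mysql -uUSER -pPASSWORD < gap.sql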

#Extract the new master's binlog coordinates from the fresh backup; the other slaves will start from here
grep '^CHANGE MASTER' fullback.sql
CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.00000#', MASTER_LOG_POS=#;

(5). On all other slaves, restore the database from the backup and point them at the new master:

vim fullback.sql
CHANGE MASTER TO
MASTER_HOST='masterhostip',  # the new master's IP
MASTER_USER='USER',          # replication account created on the master
MASTER_PASSWORD='password',
MASTER_PORT=3306,
MASTER_LOG_FILE='mariadb-bin.00000#', MASTER_LOG_POS=#;   # the coordinates extracted above
# Clear the old replication settings and load the new ones
stop slave;
reset slave all;
set sql_log_bin=off;
source fullback.sql;
set sql_log_bin=on;
start slave;
show slave status\G

3. Build a database cluster with MHA 0.58

Environment:

MHA manager: 10.0.0.7
master: 10.0.0.8
slave1: 10.0.0.18
slave2: 10.0.0.28

Install the MHA packages

[root@mha ~]#wget https://github.com/yoshinorim/mha4mysql-manager/releases/download/v0.58/mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
[root@mha ~]#wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.centos.noarch.rpm
[root@mha ~]#ls
mha4mysql-manager-0.58-0.el7.centos.noarch.rpm  mha4mysql-node-0.58-0.el7.centos.noarch.rpm
#Install mha4mysql-node-0.58-0.el7.centos.noarch.rpm on every machine
yum -y install mha4mysql-node-0.58-0.el7.centos.noarch.rpm
#On the MHA manager, install both packages
yum -y install mha4mysql-node-0.58-0.el7.centos.noarch.rpm mha4mysql-manager-0.58-0.el7.centos.noarch.rpm

Install MySQL on every machine except the MHA manager

yum -y install mysql-server

Configure the MySQL master

[root@master ~]#vim /etc/my.cnf.d/mysql-server.cnf
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
log-bin
skip_name_resolve=1
general_log 
#Start the service, then create and authorize the accounts used for replication and for MHA
[root@master ~]#systemctl start mysqld
[root@master ~]#mysql
mysql> show master logs;
+--------------------+-----------+-----------+
| Log_name           | File_size | Encrypted |
+--------------------+-----------+-----------+
| centos8-bin.000001 |       179 | No        |
| centos8-bin.000002 |       156 | No        |
+--------------------+-----------+-----------+
2 rows in set (0.00 sec)
mysql> create user repluser@'10.0.0.%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> grant replication slave on *.* to repluser@"10.0.0.%";
Query OK, 0 rows affected (0.00 sec)

mysql> create user mhauser@'10.0.0.%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> grant all on *.* to mhauser@'10.0.0.%';
Query OK, 0 rows affected (0.01 sec)
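
Before moving on to the slaves, the replication account can be verified from another host (a quick added check, not in the original steps):

[root@slave1 ~]#mysql -urepluser -p123456 -h10.0.0.8 -e 'select 1;'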

Configure slave server slave1

[root@slave1 ~]#vim /etc/my.cnf.d/mysql-server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
server_id=2
log-bin
read_only
relay_log_purge=0
skip_name_resolve=1
#Start the service and set up replication (the binlog file name matches the master's `show master logs` output above)
mysql> change master to
    -> master_host='10.0.0.8',
    -> master_port=3306,
    -> master_user='repluser',
    -> master_password='123456',
    -> master_log_file='centos8-bin.000002',master_log_pos=156;
Query OK, 0 rows affected, 2 warnings (0.05 sec)
mysql> start slave;

Configure slave server slave2

[root@slave2 ~]#vim /etc/my.cnf.d/mysql-server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
server_id=3
read_only
relay_log_purge=0
skip_name_resolve=1
#Start the service and set up replication
mysql> change master to
    -> master_host='10.0.0.8',
    -> master_port=3306,
    -> master_user='repluser',
    -> master_password='123456',
    -> master_log_file='centos8-bin.000002',master_log_pos=156;
Query OK, 0 rows affected, 2 warnings (0.05 sec)
mysql> start slave;

Set up SSH key-based authentication

[root@mha-manager ~]#ssh-keygen
[root@mha-manager ~]#ssh-copy-id 10.0.0.7
[root@mha-manager ~]#rsync -av .ssh 10.0.0.8:/root/
[root@mha-manager ~]#rsync -av .ssh 10.0.0.18:/root/
[root@mha-manager ~]#rsync -av .ssh 10.0.0.28:/root/

Create the configuration file on the manager node

[root@mha-manager ~]#mkdir /etc/mastermha/
[root@mha-manager ~]#vim /etc/mastermha/app1.cnf 
[server default]
user=mhauser
password=123456
manager_workdir=/data/mastermha/app1/
manager_log=/data/mastermha/app1/manager.log
remote_workdir=/data/mastermha/app1/
ssh_user=root
repl_user=repluser
repl_password=123456
ping_interval=1
master_ip_failover_script=/usr/local/bin/master_ip_failover
check_repl_delay=0
master_binlog_dir=/var/lib/mysql/
[server1]
hostname=10.0.0.8
candidate_master=1
[server2]
hostname=10.0.0.18
candidate_master=1
[server3]
hostname=10.0.0.28                

Verify the MHA environment

#Check the environment
[root@mha-manager ~]#masterha_check_ssh --conf=/etc/mastermha/app1.cnf
[root@mha-manager ~]#masterha_check_repl --conf=/etc/mastermha/app1.cnf
#Check the status
[root@mha-manager ~]#masterha_check_status --conf=/etc/mastermha/app1.cnf
#Start MHA
[root@mha ~]#masterha_manager --conf=/etc/mastermha/app1.cnf
Sun Oct 18 16:41:45 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Oct 18 16:41:45 2020 - [info] Reading application default configuration from /etc/mastermha/app1.cnf..
Sun Oct 18 16:41:45 2020 - [info] Reading server configuration from /etc/mastermha/app1.cnf..
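
Run this way, masterha_manager ties up the terminal and exits after one failover. A common pattern is to run it in the background with its standard options (shown here as a suggestion, not part of the original setup):

nohup masterha_manager --conf=/etc/mastermha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /data/mastermha/app1/manager.log 2>&1 &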

Test that MHA failover works

#Stop the master
[root@master ~]#systemctl stop mysqld
#Watching the MHA log shows slave1 being automatically promoted to the new master
[root@mha ~]#tail -f /data/mastermha/app1/manager.log
Sun Oct 18 16:44:38 2020 - [info] New master is 10.0.0.18(10.0.0.18:3306)
Sun Oct 18 16:44:38 2020 - [info] Starting master failover..
Sun Oct 18 16:44:38 2020 - [info] 
From:
10.0.0.8(10.0.0.8:3306) (current master)
 +--10.0.0.18(10.0.0.18:3306)
 +--10.0.0.28(10.0.0.28:3306)

To:
10.0.0.18(10.0.0.18:3306) (new master)
 +--10.0.0.28(10.0.0.28:3306)
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] * Phase 3.4: New Master Diff Log Generation Phase..
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info]  This server has all relay logs. No need to generate diff files from the latest slave.
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] * Phase 3.5: Master Log Apply Phase..
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] *NOTICE: If any error happens from this phase, manual recovery is needed.
Sun Oct 18 16:44:38 2020 - [info] Starting recovery on 10.0.0.18(10.0.0.18:3306)..
Sun Oct 18 16:44:38 2020 - [info]  This server has all relay logs. Waiting all logs to be applied.. 
Sun Oct 18 16:44:38 2020 - [info]   done.
Sun Oct 18 16:44:38 2020 - [info]  All relay logs were successfully applied.
Sun Oct 18 16:44:38 2020 - [info] Getting new master's binlog name and position..
Sun Oct 18 16:44:38 2020 - [info]  slave1-bin.000001:156
Sun Oct 18 16:44:38 2020 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='10.0.0.18', MASTER_PORT=3306, MASTER_LOG_FILE='slave1-bin.000001', MASTER_LOG_POS=156, MASTER_USER='mhauser', MASTER_PASSWORD='xxx';
Sun Oct 18 16:44:38 2020 - [warning] master_ip_failover_script is not set. Skipping taking over new master IP address.
Sun Oct 18 16:44:38 2020 - [info] ** Finished master recovery successfully.
Sun Oct 18 16:44:38 2020 - [info] * Phase 3: Master Recovery Phase completed.
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] * Phase 4: Slaves Recovery Phase..
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] * Phase 4.1: Starting Parallel Slave Diff Log Generation Phase..
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] -- Slave diff file generation on host 10.0.0.28(10.0.0.28:3306) started, pid: 5058. Check tmp log /data/mastermha/app1//10.0.0.28_3306_20201018164435.log if it takes time..
Sun Oct 18 16:44:39 2020 - [info] 
Sun Oct 18 16:44:39 2020 - [info] Log messages from 10.0.0.28 ...
Sun Oct 18 16:44:39 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info]  This server has all relay logs. No need to generate diff files from the latest slave.
Sun Oct 18 16:44:39 2020 - [info] End of log messages from 10.0.0.28.
Sun Oct 18 16:44:39 2020 - [info] -- 10.0.0.28(10.0.0.28:3306) has the latest relay log events.
Sun Oct 18 16:44:39 2020 - [info] Generating relay diff files from the latest slave succeeded.
Sun Oct 18 16:44:39 2020 - [info] 
Sun Oct 18 16:44:39 2020 - [info] * Phase 4.2: Starting Parallel Slave Log Apply Phase..
Sun Oct 18 16:44:39 2020 - [info] 
Sun Oct 18 16:44:39 2020 - [info] -- Slave recovery on host 10.0.0.28(10.0.0.28:3306) started, pid: 5060. Check tmp log /data/mastermha/app1//10.0.0.28_3306_20201018164435.log if it takes time..
Sun Oct 18 16:44:40 2020 - [info] 
Sun Oct 18 16:44:40 2020 - [info] Log messages from 10.0.0.28 ...
Sun Oct 18 16:44:40 2020 - [info] 
Sun Oct 18 16:44:39 2020 - [info] Starting recovery on 10.0.0.28(10.0.0.28:3306)..
Sun Oct 18 16:44:39 2020 - [info]  This server has all relay logs. Waiting all logs to be applied.. 
Sun Oct 18 16:44:39 2020 - [info]   done.
Sun Oct 18 16:44:39 2020 - [info]  All relay logs were successfully applied.
Sun Oct 18 16:44:39 2020 - [info]  Resetting slave 10.0.0.28(10.0.0.28:3306) and starting replication from the new master 10.0.0.18(10.0.0.18:3306)..
Sun Oct 18 16:44:39 2020 - [info]  Executed CHANGE MASTER.
Sun Oct 18 16:44:39 2020 - [info]  Slave started.
Sun Oct 18 16:44:40 2020 - [info] End of log messages from 10.0.0.28.
Sun Oct 18 16:44:40 2020 - [info] -- Slave recovery on host 10.0.0.28(10.0.0.28:3306) succeeded.
Sun Oct 18 16:44:40 2020 - [info] All new slave servers recovered successfully.
Sun Oct 18 16:44:40 2020 - [info] 
Sun Oct 18 16:44:40 2020 - [info] * Phase 5: New master cleanup phase..
Sun Oct 18 16:44:40 2020 - [info] 
Sun Oct 18 16:44:40 2020 - [info] Resetting slave info on the new master..
Sun Oct 18 16:44:40 2020 - [info]  10.0.0.18: Resetting slave info succeeded.
Sun Oct 18 16:44:40 2020 - [info] Master failover to 10.0.0.18(10.0.0.18:3306) completed successfully.
Sun Oct 18 16:44:40 2020 - [info] 
----- Failover Report -----

app1: MySQL Master failover 10.0.0.8(10.0.0.8:3306) to 10.0.0.18(10.0.0.18:3306) succeeded

Master 10.0.0.8(10.0.0.8:3306) is down!

Check MHA Manager logs at mha:/data/mastermha/app1/manager.log for details.

Started automated(non-interactive) failover.
The latest slave 10.0.0.18(10.0.0.18:3306) has all relay logs for recovery.
Selected 10.0.0.18(10.0.0.18:3306) as a new master.
10.0.0.18(10.0.0.18:3306): OK: Applying all logs succeeded.
10.0.0.28(10.0.0.28:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
10.0.0.28(10.0.0.28:3306): OK: Applying all logs succeeded. Slave started, replicating from 10.0.0.18(10.0.0.18:3306)
10.0.0.18(10.0.0.18:3306): Resetting slave info succeeded.
Master failover to 10.0.0.18(10.0.0.18:3306) completed successfully.
#After promoting the new master, the MHA manager stops
[root@mha ~]#masterha_check_status --conf=/etc/mastermha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
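
After a failover the manager exits; before it can be started again, the old master has to be repaired and re-joined as a slave of the new master. A sketch using the coordinates MHA printed in the log above (hedged; adjust the file and position to your own failover report):

#On the repaired old master (10.0.0.8):
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.18', MASTER_PORT=3306,
    -> MASTER_USER='repluser', MASTER_PASSWORD='123456',
    -> MASTER_LOG_FILE='slave1-bin.000001', MASTER_LOG_POS=156;
mysql> START SLAVE;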

Implement VIP failover

#The master_ip_failover script referenced in app1.cnf
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '10.0.0.100/24';
my $gateway = '10.0.0.254';
my $interface = 'eth0';
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig $interface:$key $vip;/sbin/arping -I $interface -c 3 -s $vip $gateway >/dev/null 2>&1";
my $ssh_stop_vip = "/sbin/ifconfig $interface:$key down";
GetOptions(
    'command=s' => \$command,
    'ssh_user=s' => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s' => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s' => \$new_master_host,
    'new_master_ip=s' => \$new_master_ip,
    'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage master ip address at global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # all arguments are passed.
        # If you manage master ip address at global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
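
For MHA to invoke it, the script must sit at the path configured as master_ip_failover_script in app1.cnf and be executable (assuming it was saved locally as master_ip_failover):

[root@mha ~]#cp master_ip_failover /usr/local/bin/
[root@mha ~]#chmod +x /usr/local/bin/master_ip_failover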

Test VIP failover

#Add the VIP on the master
[root@master ~]#ip addr add 10.0.0.100/24 dev eth0
[root@master ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:c1:38:f6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.8/24 brd 10.0.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec1:38f6/64 scope link 
       valid_lft forever preferred_lft forever
#Check the environment and start MHA
[root@mha ~]#masterha_check_ssh --conf=/etc/mastermha/app1.cnf
[root@mha ~]#masterha_check_repl --conf=/etc/mastermha/app1.cnf
[root@mha ~]#masterha_manager --conf=/etc/mastermha/app1.cnf
#Stop the master's MySQL service
[root@master ~]#systemctl stop mysqld
# Check slave1's IP addresses: the VIP has moved over
[root@81 data]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:02:89:3c brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.81/8 brd 10.255.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe02:893c/64 scope link 
       valid_lft forever preferred_lft forever

4. Hands-on: Percona XtraDB Cluster (PXC 5.7)

(1). Environment (four CentOS 7 hosts)

pxc1:10.0.0.7
pxc2:10.0.0.17
pxc3:10.0.0.27
pxc4:10.0.0.37

Disable the firewall and SELinux on every node, and make sure the clocks are synchronized; see the snippet below.
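
One way to do all three on CentOS 7 (a convenience sketch; the original only states the requirement):

systemctl disable --now firewalld
setenforce 0        # also set SELINUX=disabled in /etc/selinux/config to persist
yum -y install chrony && systemctl enable --now chronyd   # time sync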

(2). Install Percona XtraDB Cluster 5.7

#Configure the yum repository (using the Tsinghua mirror)
cat /etc/yum.repos.d/pxc.repo
[percona]
name=percona_repo
baseurl=https://mirrors.tuna.tsinghua.edu.cn/percona/release/$releasever/RPMS/$basearch
enabled = 1
gpgcheck = 0
#The repository must be configured on every node
[root@pxc1 ~]#scp /etc/yum.repos.d/pxc.repo 10.0.0.17:/etc/yum.repos.d
[root@pxc1 ~]#scp /etc/yum.repos.d/pxc.repo 10.0.0.27:/etc/yum.repos.d
#Install PXC 5.7 on all three nodes
[root@pxc1 ~]#yum install Percona-XtraDB-Cluster-57 -y
[root@pxc2 ~]#yum install Percona-XtraDB-Cluster-57 -y
[root@pxc3 ~]#yum install Percona-XtraDB-Cluster-57 -y

(3). Configure MySQL and the cluster settings on each node

#Change server-id on each node; the rest of this file needs no changes
[root@pxc1 ~]#vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf 
server-id=1
[root@pxc2 ~]#vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf 
server-id=2
[root@pxc3 ~]#vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf 
server-id=3
#Edit the PXC configuration file on each node as follows
#pxc1
[root@pxc1 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf 
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27  
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.7        
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-1      
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:123456"
#pxc2
[root@pxc2 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf 
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27  
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.17        
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-2      
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:123456"
#pxc3
[root@pxc3 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf 
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27  
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.27        
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-3      
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:123456"

(4). Bootstrap the first node of the PXC cluster

[root@pxc1 ~]#systemctl start mysql@bootstrap.service
[root@pxc1 ~]#ss -nutl
Netid State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
tcp   LISTEN     0      128            *:22                         *:*                  
tcp   LISTEN     0      128            *:4567                       *:*                  
tcp   LISTEN     0      128         [::]:22                      [::]:*                  
tcp   LISTEN     0      80          [::]:3306                    [::]:*
#Look up the temporary root password
[root@pxc1 ~]#grep "temporary password" /var/log/mysqld.log 
2020-10-18T04:14:33.485069Z 1 [Note] A temporary password is generated for root@localhost: GmQis8f.VAzo
#Log in, change the root password, then create and authorize the SST user
[root@pxc1 ~]#mysql -uroot -p'GmQis8f.VAzo'
mysql> alter user 'root'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> create user 'sstuser'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> grant reload,lock tables,process,replication client on *.* to 'sstuser'@'localhost';
Query OK, 0 rows affected (0.01 sec)
#Check the wsrep variables and status
mysql> show variables like 'wsrep%'\G
mysql> show status like 'wsrep%'\G
#The key entries to watch:
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 72fc337b-10f9-11eb-8052-3ba4e644cbd7 |
| ...                        | ...                                  |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| ...                        | ...                                  |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| ...                        | ...                                  |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+

(5). Start all the other nodes in the PXC cluster

[root@pxc2 ~]#systemctl start mysql
[root@pxc3 ~]#systemctl start mysql

(6). Check the cluster status and verify that the cluster works

#Check the cluster status from any node
[root@pxc1 ~]#mysql -uroot -p123456
mysql> show variables like 'wsrep_node_name';
+-----------------+--------------------+
| Variable_name   | Value              |
+-----------------+--------------------+
| wsrep_node_name | pxc-cluster-node-1 |
+-----------------+--------------------+
1 row in set (0.01 sec)

mysql> show variables like 'wsrep_node_address';
+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| wsrep_node_address | 10.0.0.7 |
+--------------------+----------+
1 row in set (0.00 sec)

mysql> show variables like 'wsrep_on';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_on      | ON    |
+---------------+-------+
1 row in set (0.01 sec)

mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql> create database testdb1;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| testdb1            |
+--------------------+
5 rows in set (0.00 sec)
#Using Xshell to send the same statement to all three nodes simultaneously, it succeeds on exactly one node
mysql> create database testdb2;
Query OK, 1 row affected (0.01 sec)
#The other nodes all report failure
mysql> create database testdb2;
ERROR 1007 (HY000): Can't create database 'testdb2'; database exists

(7). Add another host, pxc4 (10.0.0.37), to the PXC cluster

[root@pxc4 ~]#yum install Percona-XtraDB-Cluster-57 -y
[root@pxc4 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27,10.0.0.37
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.37
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-4
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:123456"
#Also add 10.0.0.37 to wsrep_cluster_address on the other nodes
[root@pxc4 ~]#vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
server-id=4
#Start the service
[root@pxc4 ~]#systemctl start mysql
#Check the cluster size
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 4     |
+--------------------+-------+
1 row in set (0.00 sec)

(8). Recover a failed node in the PXC cluster

#Stop any one node
[root@pxc4 ~]#systemctl stop mysql
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)
#While that node is down, add new data on any surviving node
mysql> create database testdb3;
Query OK, 1 row affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| testdb1            |
| testdb2            |
| testdb3            |
+--------------------+
7 rows in set (0.00 sec)
#Restart the node that was stopped; the data syncs over automatically
[root@pxc4 ~]#systemctl start mysql
[root@pxc4 ~]#mysql -uroot -p123456
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| testdb1            |
| testdb2            |
| testdb3            |
+--------------------+
7 rows in set (0.00 sec)
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 4     |
+--------------------+-------+
1 row in set (0.01 sec)

5. Deploy binary MySQL 8 with Ansible

(1). Install Ansible (this requires the EPEL repository)

yum -y install ansible

(2). Prepare the required files

#Create an ansible directory to hold the following files, for easier management
[root@ansible ~]#mkdir -p /data/ansible/files
#Download the MySQL 8.0 tarball into files/
[root@ansible ~]#wget -P /data/ansible/files https://dev.mysql.com/get/Downloads/MySQL-8.0/mysql-8.0.21-linux-glibc2.12-x86_64.tar.xz
#Prepare the database configuration file /etc/my.cnf
[root@ansible ~]#cat /data/ansible/files/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
log-error=/data/mysql/mysqld.log
log-bin
[client]
port=3306
socket=/data/mysql/mysql.sock
#Prepare a script that changes the freshly installed database's initial password
[root@ansible ~]#cat /data/ansible/files/chpass.sh
#!/bin/bash
PASSWORD=`awk '/temporary password/{print $NF}' /data/mysql/mysqld.log`
/usr/local/mysql/bin/mysqladmin -uroot -p"$PASSWORD" password 123456
#The resulting layout:
[root@ansible ~]#tree /data/ansible/
/data/ansible/
└── files
    ├── chpass.sh
    ├── my.cnf
    └── mysql-8.0.21-linux-glibc2.12-x86_64.tar.xz
#Add the hosts to deploy to in /etc/ansible/hosts
[root@ansible ~]#vim /etc/ansible/hosts
[dbservers]
10.0.0.18    ansible_connection=ssh   ansible_user=root  ansible_password=123
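
Connectivity to the target host can be confirmed with a standard ad-hoc ping before anything else:

[root@ansible ~]#ansible dbservers -m ping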

(3). Prepare the playbook

[root@ansible ~]#cat /data/ansible/install_mysql.yml
---
#install mysql8.0
- hosts: dbservers
  remote_user: root
  gather_facts: no

  tasks:
    - name: install packages
      yum: name=libaio,ncurses-compat-libs
    - name: create mysql group
      group: name=mysql gid=306
    - name: create mysql user
      user: name=mysql uid=306 group=mysql shell=/sbin/nologin system=yes create_home=no home=/data/mysql
    - name: config my.cnf
      copy: src=/data/ansible/files/my.cnf dest=/etc/my.cnf
    - name: copy tar to remote host and file mode
      unarchive: src=/data/ansible/files/mysql-8.0.21-linux-glibc2.12-x86_64.tar.xz dest=/usr/local/ owner=root group=root
    - name: create linkfile /usr/local/mysql
      file: src=/usr/local/mysql-8.0.21-linux-glibc2.12-x86_64 dest=/usr/local/mysql state=link
    - name: data dir
      shell: chdir=/usr/local/mysql/ ./bin/mysqld --initialize --datadir=/data/mysql --user=mysql
      tags: data
    - name: service script
      shell: /bin/cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
    - name: enable service
      shell: /etc/init.d/mysqld start;chkconfig --add mysqld;chkconfig mysqld on
      tags: service
    - name: PATH variable
      copy: content='PATH=/usr/local/mysql/bin:$PATH' dest=/etc/profile.d/mysql.sh
    - name: usefullpath
      # note: this only affects the task's own shell; interactive logins source /etc/profile.d/mysql.sh themselves
      shell: source /etc/profile.d/mysql.sh
    - name: change password
      script: /data/ansible/files/chpass.sh
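
Before the real run, the playbook can be sanity-checked with standard ansible-playbook flags:

[root@ansible ~]#ansible-playbook /data/ansible/install_mysql.yml --syntax-check
[root@ansible ~]#ansible-playbook /data/ansible/install_mysql.yml --list-tasks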

(4). Install MySQL with the playbook

[root@ansible ~]#ansible-playbook /data/ansible/install_mysql.yml 

Deployment is complete; the initial MySQL root password is 123456.
