MySQL + MGR + ShardingSphere-Proxy: Achieving MySQL High Availability

1. MySQL Installation

Notes:
1. This guide installs MySQL 8.0.26.
2. MGR is built in single-primary mode.

1.1 MySQL Installation Steps

1. As the system root user, change to the target installation directory, e.g. /data or /opt

2. Download MySQL

 [root@server01 opt]# wget https://dev.mysql.com/get/Downloads/MySQL-8.0/mysql-8.0.26-linux-glibc2.12-x86_64.tar.xz

You can also download the latest release directly from the official site (choose the Linux build).

3. Extract the archive

[root@server01 opt]# tar xvJf mysql-8.0.26-linux-glibc2.12-x86_64.tar.xz

4. Rename the directory

[root@server01 opt]# mv mysql-8.0.26-linux-glibc2.12-x86_64 mysql-8.0

5. In the mysql-8.0 directory, create a data directory for data files

[root@server01 opt]# cd mysql-8.0
[root@server01 mysql-8.0]# mkdir data

6. Create the mysql group and user

[root@server01 mysql-8.0]# groupadd mysql
[root@server01 mysql-8.0]# useradd -g mysql mysql

7. Grant ownership to the mysql user

[root@server01 mysql-8.0]# chown -R mysql:mysql /opt/mysql-8.0/

8. Change to the bin directory

[root@server01 mysql-8.0]# cd bin/

9. Initialize the data directory

[root@server01 bin]# ./mysqld --user=mysql --basedir=/opt/mysql-8.0 --datadir=/opt/mysql-8.0/data/ --initialize

./mysqld: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory

If the error above appears, install the missing library with the command below and then re-run the initialization; if there is no error, skip this step.

[root@server01 bin]# yum -y install libaio

The initialization prints a temporary root password; record it.

10. Edit the my.cnf file

[root@server01 bin]# vi /etc/my.cnf

Comment out the mysqld_safe section and update the settings as follows:

basedir=/opt/mysql-8.0
datadir=/opt/mysql-8.0/data
socket=/tmp/mysql.sock
character-set-server=UTF8MB4
## Tuning parameters below; adjust them to your own needs
tmp_table_size = 256M
max_heap_table_size = 256M
max_connect_errors = 100
max_connections=2500
table_open_cache=2048


11. Install mysqld as a system service

[root@server01 mysql-8.0]# cp -a ./support-files/mysql.server /etc/init.d/mysql

12. Make the script executable and register the service

[root@server01  mysql-8.0]# chmod +x /etc/init.d/mysql
[root@server01 mysql-8.0]# chkconfig --add mysql

13. Start MySQL

[root@server01 mysql-8.0]# service mysql start

If the startup output reports SUCCESS, MySQL started correctly.

14. Check and confirm the service status

[root@server01 mysql-8.0]# service mysql status


15. Link the mysql client into the system path

[root@server01 mysql-8.0]# ln -s /opt/mysql-8.0/bin/mysql /usr/bin

16. Log in with mysql -uroot -p, using the temporary password generated earlier


17. Change the root password; 123456 below is the new password, set your own

mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '123456';

Note: for security, do not use a weak password.

18. Flush privileges so the change takes effect

mysql> flush privileges;

19. Switch to the mysql database

mysql> use mysql;

20. Enable remote connections and apply the change

mysql> update user set host='%' where user='root';
mysql> flush privileges;

1.2 Possible Issues

1. During initialization, you may see an error like:

mysqld: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory

Solution: the shared library libaio is missing. Install it with:

yum -y install libaio

2. If starting MySQL fails with:

[root@localhost mysql]# service mysql restart  
Starting MySQL...[ERROR] The server quit without updating PID file

Solution:

chown -R mysql:mysql /opt/mysql-8.0/data/
chmod -R 755 /opt/mysql-8.0/data/

where /opt/mysql-8.0/data/ is the MySQL data directory

2. Building a Single-Primary MGR Cluster

Official documentation: https://dev.mysql.com/doc/refman/8.0/en/group-replication.html

2.1 Single-Primary MGR Setup Steps

2.1.1 Prepare the Environment

1. Set each server's hostname

Server IP   Hostname
Host1_IP    server01
Host2_IP    server02
Host3_IP    server03

vi /etc/hostname

2. Bind the IP-to-hostname mappings

vi /etc/hosts

Add entries like the following:

Host1_IP   server01
Host2_IP   server02
Host3_IP   server03

Note: the hostnames are set to simplify later configuration; pick meaningful names for your environment.
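As a sketch, the mapping above can be staged in a temp file and reviewed before appending to /etc/hosts. The 192.168.1.x addresses are placeholders for your real host IPs:

```shell
# Stage the hostname mappings in a temp file for review; the
# 192.168.1.x addresses are placeholders for the real host IPs.
HOSTS_TMP=$(mktemp)
cat > "$HOSTS_TMP" <<'EOF'
192.168.1.101 server01
192.168.1.102 server02
192.168.1.103 server03
EOF
# Append for real with: cat "$HOSTS_TMP" | sudo tee -a /etc/hosts
grep server02 "$HOSTS_TMP"
```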

3. Reboot the server to apply the changes

init 6

Note: proceed with caution; before rebooting, confirm that no applications on the server are in use.

4. Verify the environment

mysql> show master status;
+---------------+----------+--------------+------------------+-------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------+----------+--------------+------------------+-------------------+
| binlog.000001 |      152 |              |                  |                   |
+---------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

5. Prepare the parameters

vi /etc/my.cnf
# server_id: must not clash with any other MySQL instance
server_id=2
# Enable global transaction identifiers; this requires enforce_gtid_consistency=ON
gtid_mode=ON
# Enforce GTID consistency
enforce_gtid_consistency=ON
# Disable binary log checksums
binlog_checksum=NONE

log_bin=binlog
# Write updates received from the primary to this server's own binary log
log_slave_updates=ON
# Log changes in row format
binlog_format=ROW
# Store master_info metadata in system tables
master_info_repository=TABLE
# Store relay log metadata in system tables
relay_log_info_repository=TABLE
relay-log-recovery=1

# Have the server collect write sets so they can be recorded in the binary log.
# Write sets are based on each row's primary key, uniquely identify the changed
# rows, and are used for conflict detection.
transaction_write_set_extraction=XXHASH64
# The group name can be any valid UUID, but must not be a member's GTID source UUID,
# and it must be identical on all nodes!
loose-group_replication_group_name="aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa"
# Do not start group replication automatically when mysqld starts
loose-group_replication_start_on_boot=OFF
# This host's IP (or mapped hostname); port 33061 accepts incoming connections from other group members
loose-group_replication_local_address= "#LocalIP:33061"
# To join the group, this server first contacts one of these seed members and requests admission.
# The list does not need to contain every group member, only the nodes this one should contact when joining.
loose-group_replication_group_seeds= "#Host1_IP:33061,#Host2_IP:33061,#Host3_IP:33061"
# Whether to bootstrap the group automatically. Use this on only one server instance,
# and normally only when first bootstrapping the group (or after a full group shutdown);
# bootstrapping more than once can cause a split-brain.
loose-group_replication_bootstrap_group=OFF

Apart from server_id and group_replication_local_address, the configuration is identical on all three servers.
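As a sketch, the two per-node values can be generated from a host list (the 192.168.1.x addresses are placeholders); every other line in the stanza above is copied verbatim to all three servers:

```shell
# Generate the two values that differ per node; the 192.168.1.x
# addresses are placeholders for the real host IPs.
i=0
for ip in 192.168.1.101 192.168.1.102 192.168.1.103; do
  i=$((i + 1))
  echo "server_id=$i"
  echo "loose-group_replication_local_address=\"$ip:33061\""
done
```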

6. Restart MySQL

[root@server01 mysql-8.0]# service mysql restart

2.1.2 Create the Group Replication Account

Run the following identically on all three servers.

Create the replication account:

mysql> set sql_log_bin = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> create user rpl_user identified with 'mysql_native_password' by 'rpl_123';
Query OK, 0 rows affected (0.02 sec)

mysql> grant replication slave, backup_admin on *.* to rpl_user;
Query OK, 0 rows affected (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> set sql_log_bin = 1;
Query OK, 0 rows affected (0.00 sec)

mysql> change master to master_user='rpl_user', master_password='rpl_123' for channel 'group_replication_recovery';
Query OK, 0 rows affected, 5 warnings (0.05 sec)

The account rpl_user and password rpl_123 above can be changed to your liking, but the replication account must be identical on every node.
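A minimal sketch for parameterizing the credentials, so the identical SQL can be piped into mysql on each node (rpl_user/rpl_123 as above; substitute your own):

```shell
# Emit the account-setup SQL with the credentials parameterized,
# so the same script can be reused on all three nodes.
RPL_USER=rpl_user
RPL_PASS=rpl_123
SQL=$(cat <<EOF
set sql_log_bin = 0;
create user ${RPL_USER} identified with 'mysql_native_password' by '${RPL_PASS}';
grant replication slave, backup_admin on *.* to ${RPL_USER};
flush privileges;
set sql_log_bin = 1;
change master to master_user='${RPL_USER}', master_password='${RPL_PASS}' for channel 'group_replication_recovery';
EOF
)
echo "$SQL"
# Apply with: echo "$SQL" | mysql -uroot -p
```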

2.1.3 Install the MGR Plugin

Run the following identically on all three servers.

mysql> install plugin group_replication soname 'group_replication.so';
Query OK, 0 rows affected (0.02 sec)
mysql> show plugins;
+---------------------------------+----------+--------------------+----------------------+---------+
| Name                            | Status   | Type               | Library              | License |
+---------------------------------+----------+--------------------+----------------------+---------+
| binlog                          | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| mysql_native_password           | ACTIVE   | AUTHENTICATION     | NULL                 | GPL     |
| sha256_password                 | ACTIVE   | AUTHENTICATION     | NULL                 | GPL     |
| caching_sha2_password           | ACTIVE   | AUTHENTICATION     | NULL                 | GPL     |
| sha2_cache_cleaner              | ACTIVE   | AUDIT              | NULL                 | GPL     |
| daemon_keyring_proxy_plugin     | ACTIVE   | DAEMON             | NULL                 | GPL     |
| CSV                             | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| MEMORY                          | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| InnoDB                          | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| ...
| BLACKHOLE                       | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| FEDERATED                       | DISABLED | STORAGE ENGINE     | NULL                 | GPL     |
| ngram                           | ACTIVE   | FTPARSER           | NULL                 | GPL     |
| mysqlx_cache_cleaner            | ACTIVE   | AUDIT              | NULL                 | GPL     |
| mysqlx                          | ACTIVE   | DAEMON             | NULL                 | GPL     |
| group_replication               | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+---------------------------------+----------+--------------------+----------------------+---------+
46 rows in set (0.01 sec)

2.1.4 Build the Cluster

Primary node

Choose one server as the primary node.

mysql> show master status;
+---------------+----------+--------------+------------------+-------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------+----------+--------------+------------------+-------------------+
| binlog.000001 |      152 |              |                  |                   |
+---------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> set global group_replication_bootstrap_group = 1;
Query OK, 0 rows affected (0.00 sec)

mysql> start group_replication;
Query OK, 0 rows affected (2.21 sec)

mysql> set global group_replication_bootstrap_group = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> show master status;
+---------------+----------+--------------+------------------+----------------------------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                      |
+---------------+----------+--------------+------------------+----------------------------------------+
| binlog.000001 |      463 |              |                  | aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa:1 |
+---------------+----------+--------------+------------------+----------------------------------------+
1 row in set (0.00 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02    |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+

server02's member role is now PRIMARY, its state is ONLINE, and the executed GTID set contains transaction 1.

Secondary nodes

On slave1:

mysql> show master status;
+---------------+----------+--------------+------------------+-------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------+----------+--------------+------------------+-------------------+
| binlog.000001 |      152 |              |                  |                   |
+---------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> start group_replication;
Query OK, 0 rows affected (3.35 sec)

mysql> show master status;
+---------------+----------+--------------+------------------+------------------------------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                        |
+---------------+----------+--------------+------------------+------------------------------------------+
| binlog.000001 |      814 |              |                  | aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa:1-2 |
+---------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

The executed GTID set has grown by one again.

Check the membership:

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01    |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02    |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

There are now two members, server02 as PRIMARY and server01 as SECONDARY, both ONLINE.

Add one more node; the steps on slave2 are the same as above.

mysql>  show master status;
+---------------+----------+--------------+------------------+-------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------+----------+--------------+------------------+-------------------+
| binlog.000001 |      152 |              |                  |                   |
+---------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> start group_replication;
Query OK, 0 rows affected (2.40 sec)

mysql> show master status;
+---------------+----------+--------------+------------------+------------------------------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                        |
+---------------+----------+--------------+------------------+------------------------------------------+
| binlog.000001 |     1165 |              |                  | aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa:1-3 |
+---------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | ba91cbfd-d420-11ed-a466-00163e0abb33 | server03     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02     |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
3 rows in set (0.01 sec)

The MGR cluster is now complete.

2.2 Verification

2.2.1 Data Synchronization

On the primary:
mysql> CREATE DATABASE test;
Query OK, 1 row affected (0.03 sec)

mysql> USE test;
Database changed
mysql> CREATE TABLE user_test (`id` INT(11) PRIMARY KEY, `name` varchar(256) NOT NULL);
Query OK, 0 rows affected, 1 warning (0.06 sec)

mysql> INSERT INTO user_test VALUES (1, 'test_name1');
Query OK, 1 row affected (0.02 sec)

mysql> select * from user_test;
+----+------------+
| id | name       |
+----+------------+
|  1 | test_name1 |
+----+------------+
1 row in set (0.00 sec)
Check on a secondary:
mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from user_test;
+----+------------+
| id | name       |
+----+------------+
|  1 | test_name1 |
+----+------------+
1 row in set (0.00 sec)

2.2.2 Node Failure

Secondary failure

Step 1: pick a secondary and simulate a failure on it

mysql> stop group_replication;
Query OK, 0 rows affected (4.52 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | ba91cbfd-d420-11ed-a466-00163e0abb33 | server03     |        3306 | OFFLINE      |             |                |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
1 row in set (0.00 sec)

Step 2: check from the other nodes (primary and secondary)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01    |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02    |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

Step 3: with the secondary down, write data on the primary

mysql> INSERT INTO user_test VALUES (2, 'test_name2');
Query OK, 1 row affected (0.03 sec)

mysql> select * from user_test;
+----+------------+
| id | name       |
+----+------------+
|  1 | test_name1 |
|  2 | test_name2 |
+----+------------+
2 rows in set (0.00 sec)

Step 4: check the remaining online secondary

mysql> select * from user_test;
+----+------------+
| id | name       |
+----+------------+
|  1 | test_name1 |
|  2 | test_name2 |
+----+------------+
2 rows in set (0.00 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01    |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02    |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

Data synchronization is working correctly.

Step 5: bring the failed node back

mysql> start group_replication;
Query OK, 0 rows affected (3.51 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | ba91cbfd-d420-11ed-a466-00163e0abb33 | server03     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02     |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

mysql> select * from user_test;
+----+------------+
| id | name       |
+----+------------+
|  1 | test_name1 |
|  2 | test_name2 |
+----+------------+
2 rows in set (0.00 sec)

The recovered node has caught up correctly.

Step 6: attempt a write on a secondary

mysql> INSERT INTO user_test VALUES (3, 'test_name3');
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

The write fails as expected: in single-primary mode, secondaries run with super_read_only enabled.

Primary failure

Step 1: simulate a failure on the primary

mysql> stop group_replication;
Query OK, 0 rows affected (4.09 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02    |        3306 | OFFLINE      |             |                |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.00 sec)

Step 2: check from the remaining nodes; a new primary has been elected

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01     |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
| group_replication_applier | ba91cbfd-d420-11ed-a466-00163e0abb33 | server03     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

Step 3: insert data on the new primary

mysql> INSERT INTO user_test VALUES (3, 'test_name3');
Query OK, 1 row affected (0.01 sec)

mysql> select * from user_test;
+----+------------+
| id | name       |
+----+------------+
|  1 | test_name1 |
|  2 | test_name2 |
|  3 | test_name3 |
+----+------------+
3 rows in set (0.00 sec)

Step 4: check on the other secondary

mysql> select * from user_test;
+----+------------+
| id | name       |
+----+------------+
|  1 | test_name1 |
|  2 | test_name2 |
|  3 | test_name3 |
+----+------------+
3 rows in set (0.01 sec)

Step 5: when the old primary recovers, it rejoins as a secondary

mysql> start group_replication;
Query OK, 0 rows affected (3.47 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01     |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
| group_replication_applier | ba91cbfd-d420-11ed-a466-00163e0abb33 | server03     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Check the data:

mysql> select * from user_test;
+----+------------+
| id | name       |
+----+------------+
|  1 | test_name1 |
|  2 | test_name2 |
|  3 | test_name3 |
+----+------------+
3 rows in set (0.00 sec)

Data synchronization is working correctly.

2.3 Possible Issues

Issue 1

If the status already shows an executed GTID set, as below:

mysql> show master status;
+---------------+----------+--------------+------------------+------------------------------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                        |
+---------------+----------+--------------+------------------+------------------------------------------+
| binlog.000002 |     1464 |              |                  | ba91cbfd-d420-11ed-a466-00163e0abb33:1-4 |
+---------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

Solution: reset the binary log and GTID state

mysql> RESET MASTER;

If you then see:

ERROR 3190 (HY000): RESET MASTER is not allowed because Group Replication is running.

Solution: stop group replication first, then reset:

mysql> stop group_replication;
Query OK, 0 rows affected (4.33 sec)
mysql> RESET MASTER;
Query OK, 0 rows affected (0.05 sec)

Issue 2

If group replication on the primary fails to start with:

Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061'

Solution: disable SELinux (or allow port 33061 in its policy):

vi /etc/selinux/config
# change SELINUX=enforcing to SELINUX=disabled

Issue 3

A secondary's state stays RECOVERING and the log shows:

Authentication plugin 'caching_sha2_password' reported error: Authentication requires secure connection. Error_code: MY-002061

Cause: since MySQL 8.0 the default authentication plugin is caching_sha2_password, so MGR recovery needs either a secure connection or public-key retrieval enabled.

Solution: run the following on each secondary:

mysql> STOP GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_recovery_get_public_key=ON;
mysql> START GROUP_REPLICATION;

Issue 4

A secondary reports:

Slave SQL for channel 'group_replication_recovery': Error 'Can't create database 'test'; database exists' on query. Default database: 'test'. Query: 'CREATE DATABASE `test` /*!40100 COLLATE 'utf8mb4_general_ci' */', Error_code: MY-001007

Solution: run the following on the secondary:

mysql> STOP GROUP_REPLICATION;
mysql> set global super_read_only=0;
mysql> drop database test;
mysql> set global super_read_only=1;
mysql> START GROUP_REPLICATION;
mysql> SELECT * FROM performance_schema.replication_group_members;

3. Setting Up and Using ShardingSphere-Proxy

Official documentation: https://shardingsphere.apache.org/document/current/cn/overview/

Note: this guide uses ShardingSphere-Proxy 5.3.2.

3.1 Installation and Configuration

Download the ShardingSphere-Proxy binary distribution:

https://www.apache.org/dyn/closer.lua/shardingsphere/5.3.2/apache-shardingsphere-5.3.2-shardingsphere-proxy-bin.tar.gz

0. Install a JDK in advance (steps omitted here)

1. Extract the package in the installation directory

tar zvxf apache-shardingsphere-5.3.2-shardingsphere-proxy-bin.tar.gz

2. Rename the directory

mv apache-shardingsphere-5.3.2-shardingsphere-proxy-bin shardingsphere

3. Download the matching MySQL driver and copy it into the lib directory

MySQL driver download: https://dev.mysql.com/downloads/connector/j/


Note that ShardingSphere's high availability relies on distributed governance, so it is currently supported only in cluster mode.

Next, configure server.yaml, config-sharding.yaml, and config-readwrite-splitting.yaml in the conf directory.

The common configuration files are:

Configuration file                 Purpose
server.yaml                        server-level settings (mode, authority, props)
config-sharding.yaml               sharding
config-readwrite-splitting.yaml    read/write splitting
config-database-discovery.yaml     high availability

server.yaml configuration:

mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: governance_ds
      server-lists: ZooKeeper_IP:2181  # ZooKeeper address
      retryIntervalMilliseconds: 500
      timeToLiveSeconds: 60
      maxRetries: 3
      operationTimeoutMilliseconds: 500

authority:
  users:
    - user: root@%
      password: root
    - user: sharding
      password: sharding
  privilege:
    type: ALL_PERMITTED

transaction:
  defaultType: LOCAL                      # XA is also supported
  providerType: Atomikos

sqlParser:
  sqlCommentParseEnabled: false
  sqlStatementCache:
    initialCapacity: 2000
    maximumSize: 65535
  parseTreeCache:
    initialCapacity: 128
    maximumSize: 1024

props:
  max-connections-size-per-query: 1
  kernel-executor-size: 16  # Infinite by default.
  proxy-frontend-flush-threshold: 128  # The default value is 128.
  sql-show: true                        # print SQL for debugging; disable once stable to save resources
  proxy.transaction.type: LOCAL         # LOCAL transactions by default
  proxy.opentracing.enabled: false      # whether to enable tracing; off by default
  query.with.cipher.column: true
  check.table.metadata.enabled: true    # check sharded-table metadata consistency at startup; default: false
  overwrite: true
  proxy-mysql-default-version: 8.0.26
  allow.range.query.with.inline.sharding: true
  proxy-hint-enabled: true

Start the proxy

In the bin directory, run the startup script. ShardingSphere-Proxy listens on port 3307 by default; pass a different port if needed, e.g. sh start.sh $port

sh start.sh

Check the logs to confirm the proxy started successfully.

Note: because the high-availability features require a ShardingSphere-Proxy cluster, two proxy nodes were deployed here with identical configuration.

3.2 Usage

1. Connecting with the Navicat client

The username and password are the ones specified in server.yaml.

Create a new connection in Navicat: port 3307, username root, password root.

After connecting, browse the configured logical databases.


2. Connecting from code

spring:
  datasource:
    druid:
      min-idle: 5
      initial-size: 5
      async-init: true
      async-close-connection-enable: true
      max-active: 50
      max-wait: 60000
      time-between-eviction-runs-millis: 60000
      min-evictable-idle-time-millis: 60000
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://IP:3307/sharding_db?useSSL=false&useUnicode=true&characterEncoding=UTF-8&serverTimezone=UTC
    username: root
    password: root

3.3 Read/Write Splitting: Configuration and Verification

Configuration

Edit config-readwrite-splitting.yaml:

databaseName: master_slave_db

dataSources:
  write_ds:
    url: jdbc:mysql://ip_1:3306/demo_write_ds?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_0:
    url: jdbc:mysql://ip_2:3306/demo_write_ds?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  read_ds_1:
    url: jdbc:mysql://ip_3:3306/demo_write_ds?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite_ds:
      staticStrategy:
        writeDataSourceName: write_ds
        readDataSourceNames:
          - read_ds_0
          - read_ds_1
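To illustrate the rule above: writes always route to write_ds, while reads alternate across the two read sources, assuming ShardingSphere's default round-robin load balancer (no loadBalancerName is set here). A sketch of that distribution:

```shell
# Sketch: four read queries alternating across the two read sources
# under a round-robin load balancer (writes always hit write_ds).
n=0
for query in q1 q2 q3 q4; do
  echo "$query -> read_ds_$((n % 2))"
  n=$((n + 1))
done
```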

Notes:

  1. databaseName: master_slave_db names the logical database to operate on; the proxy creates this database automatically after startup.
  2. Server overview:

ShardingSphere config name   Server alias   Server IP
ds_0                         server02       Host2_IP
ds_1                         server03       Host3_IP
ds_2                         server01       Host1_IP
Create the table structures

Create the following table in master_slave_db:

CREATE TABLE `t_order`  (
  `order_id` varchar(100) PRIMARY KEY NOT NULL,
  `name` varchar(100) NOT NULL,
  `order_status` varchar(50) NOT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP 
);


Verification

Insert

Insert data into the master_slave_db database through the ShardingSphere connection:

insert into t_order(order_id, name, order_status) value ('20242379', '订单1', 'SUCCESS');
insert into t_order(order_id, name, order_status) value ('28459059', '订单2', 'INIT');
insert into t_order(order_id, name, order_status) value ('71098474', '订单3', 'FAIL');

Check the data in the other databases; it has been replicated correctly.

Delete

DELETE from t_order where order_id = '28459059';


Update

update t_order set order_status = 'INIT' where order_id = '71098474';


Select

SELECT * from t_order where order_id = '71098474';


3.4 Sharding: Configuration and Verification

Configuration

Edit config-sharding.yaml:

databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:mysql://Host2_IP:3306/db_0?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://Host2_IP:3306/db_1?serverTimezone=UTC&useSSL=false
    username: root
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_item_inline
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  broadcastTables:
    - t_address
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
  
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 2}
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${order_id % 2}
  
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
      props:
        worker-id: 123
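The two INLINE expressions above fully determine where a row lands. As a rough illustration (a hypothetical helper in plain Python, not ShardingSphere code), the routing can be sketched as:

```python
# Hypothetical sketch of the routing implied by the INLINE algorithms above:
# database chosen by user_id % 2, physical table chosen by order_id % 2.
def route(user_id: int, order_id: int) -> str:
    database = f"ds_{user_id % 2}"       # database_inline: ds_${user_id % 2}
    table = f"t_order_{order_id % 2}"    # t_order_inline: t_order_${order_id % 2}
    return f"{database}.{table}"

print(route(7, 20242379))  # odd user_id, odd order_id -> ds_1.t_order_1
```

Because the SNOWFLAKE generator produces order_id values of varying parity, repeated inserts naturally spread across both physical tables, which is what the verification below demonstrates.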
Create the table structure

In sharding_db, create the following table:

CREATE TABLE `t_order`  (
  `order_id` varchar(100) PRIMARY KEY NOT NULL,
  `name` varchar(100) NOT NULL,
  `order_status` varchar(50) NOT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP 
);

(screenshot)

Check the table structures in the other databases.

(screenshot)

Verification

Insert
insert into t_order(name, order_status) value ('订单1', 'SUCCESS');
insert into t_order(name, order_status) value ('订单2', 'INIT');
insert into t_order(name, order_status) value ('订单3', 'FAIL');
insert into t_order(name, order_status) value ('订单4', 'SUCCESS');
insert into t_order(name, order_status) value ('订单5', 'INIT');
insert into t_order(name, order_status) value ('订单6', 'FAIL');
insert into t_order(name, order_status) value ('订单7', 'INIT');
insert into t_order(name, order_status) value ('订单8', 'SUCCESS');
insert into t_order(name, order_status) value ('订单9', 'INIT');
insert into t_order(name, order_status) value ('订单10', 'SUCCESS');

(screenshot)

Query the data: it is distributed across different tables and databases.

(screenshot)

Also check the data distribution on the other slaves.

(screenshot)

Delete
DELETE from t_order where order_id = 749961206243;

(screenshot)

Update
update t_order set order_status = 'INIT' where order_id = 446054704155;

(screenshot)

Query
SELECT * from t_order where order_id = 446054704155;

(screenshot)

Broadcast tables

Tables listed under broadcastTables in config-sharding.yaml are broadcast (public) tables. Create the table:

CREATE TABLE `t_address`  (
  `address_id` varchar(100) PRIMARY KEY NOT NULL,
  `address_name` varchar(100) NOT NULL
);

After the statement runs, the same table structure is created in both db_0 and db_1.
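The semantics of a broadcast table can be shown with a toy sketch (plain Python over two in-memory "shards", not ShardingSphere code): a write to a broadcast table is replayed on every physical database, unlike a sharded write that goes to exactly one.

```python
# Toy model: two physical databases, each holding a copy of t_address.
shards = {"db_0": [], "db_1": []}

def broadcast_insert(row):
    # A broadcast-table write is applied to every shard.
    for rows in shards.values():
        rows.append(row)

broadcast_insert({"address_id": "1", "address_name": "HQ"})
# both db_0 and db_1 now hold the identical row
```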

3.5 High availability + read/write splitting configuration and verification

Configuration

Configure config-database-discovery.yaml:
databaseName: database_discovery_db

dataSources:
  ds_0:
    url: jdbc:mysql://server02:3306/demo_primary_ds?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://server03:3306/demo_primary_ds?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_2:
    url: jdbc:mysql://server01:3306/demo_primary_ds?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
#读写分离相关配置
- !READWRITE_SPLITTING
    dataSources:
      replica_ds:
        dynamicStrategy:
          autoAwareDataSourceName: readwrite_ds
#高可用相关配置
- !DB_DISCOVERY
 dataSources:
   readwrite_ds:
     dataSourceNames:
       - ds_0
       - ds_1
       - ds_2
     discoveryHeartbeatName: mgr_heartbeat
     discoveryTypeName: mgr
 discoveryHeartbeats:
   mgr_heartbeat:
     props:
       keep-alive-cron: '0/5 * * * * ?'
 discoveryTypes:
   mgr:
     type: MySQL.MGR
     props:
       group-name: aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa

Notes:

  1. databaseName: database_discovery_db specifies the logical database to operate on; the proxy creates it automatically on startup.

  2. url: corresponds to MEMBER_HOST and MEMBER_PORT in MySQL's performance_schema.replication_group_members, which can be queried as follows:

    mysql>  select * from performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
    +---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
    | group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01     |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
    | group_replication_applier | ba91cbfd-d420-11ed-a466-00163e0abb33 | server03     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
    | group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
    +---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
    
  3. group-name: aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa corresponds to loose-group_replication_group_name configured in /etc/my.cnf, and can also be queried with:

mysql> SELECT * FROM performance_schema.global_variables WHERE VARIABLE_NAME='group_replication_group_name';
+------------------------------+--------------------------------------+
| VARIABLE_NAME                | VARIABLE_VALUE                       |
+------------------------------+--------------------------------------+
| group_replication_group_name | aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa |
+------------------------------+--------------------------------------+
  4. Server mapping:

| ShardingSphere datasource | Server   | Server IP |
| ------------------------- | -------- | --------- |
| ds_0                      | server02 | 主机2_IP  |
| ds_1                      | server03 | 主机3_IP  |
| ds_2                      | server01 | 主机1_IP  |
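Conceptually, the MGR discovery picks the write datasource by querying replication_group_members and selecting the ONLINE member whose MEMBER_ROLE is PRIMARY. A simplified sketch of that selection (not the actual ShardingSphere implementation):

```python
def find_primary(members):
    """Return (host, port) of the ONLINE PRIMARY member, given rows shaped
    like performance_schema.replication_group_members."""
    for m in members:
        if m["MEMBER_ROLE"] == "PRIMARY" and m["MEMBER_STATE"] == "ONLINE":
            return m["MEMBER_HOST"], m["MEMBER_PORT"]
    raise RuntimeError("no ONLINE PRIMARY in the replication group")

# Sample rows mirroring the query output shown above.
members = [
    {"MEMBER_HOST": "server01", "MEMBER_PORT": 3306,
     "MEMBER_STATE": "ONLINE", "MEMBER_ROLE": "PRIMARY"},
    {"MEMBER_HOST": "server03", "MEMBER_PORT": 3306,
     "MEMBER_STATE": "ONLINE", "MEMBER_ROLE": "SECONDARY"},
]
print(find_primary(members))  # -> ('server01', 3306)
```

When the primary fails over, the same query returns a different PRIMARY row, which is how the proxy re-points writes after an outage.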
Create the table structure

In database_discovery_db, create the following table:

DROP TABLE IF EXISTS `t_order`;
CREATE TABLE `t_order`  (
  `order_id` varchar(100) PRIMARY KEY NOT NULL,
  `user_id` varchar(100) NOT NULL,
  `name` varchar(100) NOT NULL,
  `order_status` varchar(50) NOT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP 
);

At the same time, check that the corresponding database on each of the 3 servers now contains the t_order table.

MGR node status

(screenshot)

server03 is the master node, configured in the proxy as ds_1.

server01 and server02 are slave nodes, configured in the proxy as ds_2 and ds_0.

Verification

Verification steps:

step1: Under normal conditions, MGR has 3 nodes: 1 master and 2 slaves. Perform the usual insert, delete, update and query operations and observe which database is actually hit. Insert, update and delete must go to the master node; queries go to the slave nodes, and repeated queries round-robin across the 2 slaves.

step2: Simulate a master-node outage; a new master is elected, leaving 1 master and 1 slave. Perform insert, delete, update and query operations and observe which database is actually hit. Insert, update and delete must go to the new master; queries go to the remaining slave.

step3: Recover the original master; it rejoins as a new slave, giving 1 master and 2 slaves again. Perform insert, delete, update and query operations and observe which database is actually hit. Insert, update and delete go to the new master; queries round-robin across the 2 slaves.

step4: Simulate a slave-node outage, leaving 1 master and 1 slave; perform insert, delete, update and query operations and observe which database is actually hit.

step5: Recover the slave node, restoring 1 master and 2 slaves, then perform insert, delete, update and query operations again.

Normal operation
Insert

(screenshot)

Use Navicat to check all 3 databases and confirm the data was inserted correctly.

Query

Repeated queries are routed to the slave nodes in round-robin fashion.

(screenshot)
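The round-robin behavior described above can be sketched as follows (a simplified illustration, not the proxy's actual load balancer; the proxy resolves the real replicas dynamically via MGR discovery):

```python
from itertools import cycle

# In this scenario the two slaves are server01 (ds_2) and server02 (ds_0).
slaves = cycle(["ds_0", "ds_2"])

def route_read() -> str:
    # Each read goes to the next slave in turn.
    return next(slaves)

reads = [route_read() for _ in range(4)]
print(reads)  # -> ['ds_0', 'ds_2', 'ds_0', 'ds_2']
```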

Update

(screenshot)

Delete

(screenshot)

Master node down

MGR node status at this point:

(screenshot)

Node summary:

server01 is the master node, configured in the proxy as ds_2.

server02 is a slave node, configured in the proxy as ds_0.

Insert

(screenshot)

Query

(screenshot)

Update

(screenshot)

Master node recovered - insert

(screenshot)

Node summary:

server01 is the master node, configured in the proxy as ds_2.

server02 and server03 are slave nodes, configured in the proxy as ds_0 and ds_1.

(screenshot)

Master node recovered - update

(screenshot)

Master node recovered - query

(screenshot)

Slave node down

Simulate an outage of one slave node; the MGR cluster status is now:

(screenshot)

Node summary:

server01 is the master node, configured in the proxy as ds_2.

server02 is a slave node, configured in the proxy as ds_0.

Slave node server03 is down; it is configured in the proxy as ds_1.

Insert

(screenshot)

Query

Repeated queries all succeed and are routed only to the ds_0 node.

(screenshot)

Update

(screenshot)

Delete

(screenshot)

Slave node recovered - insert

Node summary:

server01 is the master node, configured in the proxy as ds_2.

server02 and server03 are slave nodes, configured in the proxy as ds_0 and ds_1.

(screenshot)

3.6 High availability + read/write splitting + sharding configuration and verification

Configuration

Configure config-database-discovery.yaml:
databaseName: database_discovery_db

dataSources:
  ds_0:
    url: jdbc:mysql://server02:3306/demo_primary_ds_0?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://server03:3306/demo_primary_ds_0?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_2:
    url: jdbc:mysql://server01:3306/demo_primary_ds_0?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_3:
    url: jdbc:mysql://server02:3306/demo_primary_ds_1?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_4:
    url: jdbc:mysql://server03:3306/demo_primary_ds_1?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_5:
    url: jdbc:mysql://server01:3306/demo_primary_ds_1?serverTimezone=UTC&useSSL=false
    username: root
    password: "!QAZ2wsx.l"
    connectionTimeoutMilliseconds: 3000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  tables:
      t_order:
        actualDataNodes: replica_ds_${0..1}.t_order_${0..1}
        tableStrategy:
          standard:
            shardingColumn: order_id
            shardingAlgorithmName: t_order_inline
        keyGenerateStrategy:
          column: order_id
          keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: replica_ds_${user_id % 2}
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 2}
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${order_id % 2}
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
        
- !READWRITE_SPLITTING
    dataSources:
      replica_ds_0:
        dynamicStrategy:
          autoAwareDataSourceName: readwrite_ds_0
      replica_ds_1:
        dynamicStrategy:
          autoAwareDataSourceName: readwrite_ds_1
- !DB_DISCOVERY
 dataSources:
      readwrite_ds_0:
        dataSourceNames:
          - ds_0
          - ds_1
          - ds_2
        discoveryHeartbeatName: mgr_heartbeat
        discoveryTypeName: mgr
      readwrite_ds_1:
        dataSourceNames:
          - ds_3
          - ds_4
          - ds_5
        discoveryHeartbeatName: mgr_heartbeat
        discoveryTypeName: mgr
 discoveryHeartbeats:
   mgr_heartbeat:
     props:
       keep-alive-cron: '0/5 * * * * ?'
 discoveryTypes:
   mgr:
     type: MySQL.MGR
     props:
       group-name: aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa
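The actualDataNodes value replica_ds_${0..1}.t_order_${0..1} above expands to four concrete data nodes. A minimal sketch of that expansion (it supports only the `${a..b}` range form, not ShardingSphere's full inline expression grammar):

```python
import itertools
import re

def expand(expr: str):
    """Expand range placeholders like ds_${0..1}.t_order_${0..1}
    into all concrete data nodes (supports only ${a..b} ranges)."""
    parts = re.split(r"\$\{(\d+)\.\.(\d+)\}", expr)
    # parts alternates: [literal, lo, hi, literal, lo, hi, ..., literal]
    literals = parts[0::3]
    ranges = [range(int(lo), int(hi) + 1)
              for lo, hi in zip(parts[1::3], parts[2::3])]
    nodes = []
    for combo in itertools.product(*ranges):
        node = literals[0]
        for value, lit in zip(combo, literals[1:]):
            node += str(value) + lit
        nodes.append(node)
    return nodes

print(expand("replica_ds_${0..1}.t_order_${0..1}"))
# -> ['replica_ds_0.t_order_0', 'replica_ds_0.t_order_1',
#     'replica_ds_1.t_order_0', 'replica_ds_1.t_order_1']
```

Each replica_ds_N here is itself a read/write-splitting datasource backed by a three-node MGR group, so every logical data node maps to one writable primary plus two readable replicas.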

Notes:

  1. databaseName: database_discovery_db specifies the logical database to operate on; the proxy creates it automatically on startup.

  2. url: corresponds to MEMBER_HOST and MEMBER_PORT in MySQL's performance_schema.replication_group_members, which can be queried as follows:

    mysql>  select * from performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
    +---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
    | group_replication_applier | a79ecc9f-d41f-11ed-b0f9-00163e126f06 | server01     |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
    | group_replication_applier | ba91cbfd-d420-11ed-a466-00163e0abb33 | server03     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
    | group_replication_applier | efb22439-d41d-11ed-89bd-00163e0a040f | server02     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
    +---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
    
  3. group-name: aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa corresponds to loose-group_replication_group_name configured in /etc/my.cnf, and can also be queried with:

mysql> SELECT * FROM performance_schema.global_variables WHERE VARIABLE_NAME='group_replication_group_name';
+------------------------------+--------------------------------------+
| VARIABLE_NAME                | VARIABLE_VALUE                       |
+------------------------------+--------------------------------------+
| group_replication_group_name | aadaaaaa-adda-adda-aaaa-aaaaaaddaaaa |
+------------------------------+--------------------------------------+
  4. Server mapping:

| ShardingSphere datasource | Server   | Server IP | Database          |
| ------------------------- | -------- | --------- | ----------------- |
| ds_0                      | server02 | 主机2_IP  | demo_primary_ds_0 |
| ds_1                      | server03 | 主机3_IP  | demo_primary_ds_0 |
| ds_2                      | server01 | 主机1_IP  | demo_primary_ds_0 |
| ds_3                      | server02 | 主机2_IP  | demo_primary_ds_1 |
| ds_4                      | server03 | 主机3_IP  | demo_primary_ds_1 |
| ds_5                      | server01 | 主机1_IP  | demo_primary_ds_1 |

Create the table structure

In database_discovery_db, create the following table:

DROP TABLE IF EXISTS `t_order`;
CREATE TABLE `t_order`  (
  `order_id` varchar(100) PRIMARY KEY NOT NULL,
  `user_id` varchar(100) NOT NULL,
  `name` varchar(100) NOT NULL,
  `order_status` varchar(50) NOT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP 
);

At the same time, check that the corresponding databases on each of the 3 servers contain the t_order_0 and t_order_1 tables.

MGR node status

server01 is the master node, configured in the proxy as ds_2 and ds_5.

server02 and server03 are slave nodes, configured in the proxy as ds_0, ds_1, ds_3 and ds_4.

Verification

Verification steps:

step1: Under normal conditions, MGR has 3 nodes: 1 master and 2 slaves. Per the configuration, each node holds 2 databases, demo_primary_ds_0 and demo_primary_ds_1, and each database holds 2 order tables, t_order_0 and t_order_1.

Perform multiple inserts: the database is chosen by user_id and the specific table by order_id. A query by order_id is automatically routed to the matching table in the matching database and returns the data; also observe the read/write-splitting behavior.

step2: Simulate a master-node outage; a new master is elected, leaving 1 master and 1 slave. Perform multiple inserts: the database is chosen by user_id and the specific table by order_id. A query by order_id is automatically routed to the matching table in the matching database and returns the data.

step3: Recover the original master; it rejoins as a new slave, giving 1 master and 2 slaves again. Perform insert, delete, update and query operations and observe which database is actually hit. Insert, update and delete go to the new master; queries round-robin across the 2 slaves.

step4: Simulate a slave-node outage, leaving 1 master and 1 slave; perform insert, delete, update and query operations and observe which database is actually hit.

step5: Recover the slave node, restoring 1 master and 2 slaves, then perform insert, delete, update and query operations again.

Normal operation
Insert

(screenshot)

After multiple inserts, the logs show that different rows are routed to different tables in different databases.

Query

(screenshot)

Update

(screenshot)

Delete

(screenshot)

Master node down

MGR node status:

server03 is the master node, configured in the proxy as ds_1 and ds_4.

server02 is a slave node, configured in the proxy as ds_0 and ds_3.

Insert

(screenshot)

Query

(screenshot)

Master node recovered - insert

MGR node status:

server03 is the master node, configured in the proxy as ds_1 and ds_4.

server01 and server02 are slave nodes, configured in the proxy as ds_0, ds_2, ds_3 and ds_5.

(screenshot)

Master node recovered - query

Query the data inserted during the outage:

(screenshot)

The results are returned correctly.

Master node recovered - update

Update the data inserted during the outage:

(screenshot)

The update succeeds.

Slave node down

MGR node status:

server03 is the master node, configured in the proxy as ds_1 and ds_4.

server01 is a slave node, configured in the proxy as ds_2 and ds_5.

Insert

(screenshot)

Query

(screenshot)

Slave node recovered - insert

MGR node status:

server03 is the master node, configured in the proxy as ds_1 and ds_4.

server01 and server02 are slave nodes, configured in the proxy as ds_0, ds_2, ds_3 and ds_5.

(screenshot)

Slave node recovered - query

Query the data inserted during the outage:

(screenshot)

The results are returned correctly.

3.7 Possible problems

Problem 1

Symptom: when querying by primary key in Navicat, with a sharding strategy such as t_order_${order_id % 2}, the following problem may occur:

(screenshot)

Solution:

Change the search condition to a Long (numeric) value so that the sharding expression can be evaluated.

(screenshot)
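The cause can be illustrated with a toy version of the inline expression (a hypothetical sketch; ShardingSphere actually evaluates a Groovy expression): the modulo in t_order_${order_id % 2} needs a numeric value, so a quoted string key cannot be routed.

```python
def shard_table(order_id) -> str:
    # Toy version of t_order_${order_id % 2}: works only when order_id is
    # numeric, mirroring the Navicat issue where a quoted (string) key
    # cannot be sharded.
    if not isinstance(order_id, int):
        raise TypeError("sharding value must be numeric (Long)")
    return f"t_order_{order_id % 2}"

print(shard_table(446054704155))  # -> t_order_1
```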

Problem 2

If the following error occurs:

### Error querying database.  Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: Unknown system variable 'query_cache_size'
### The error may exist in com/example/testdemo/dao/TOrderMapper.java (best guess)
### The error may involve com.example.testdemo.dao.TOrderMapper.selectList
### The error occurred while executing a query
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: Unknown system variable 'query_cache_size'] with root cause
java.sql.SQLException: Unknown system variable 'query_cache_size'
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
	at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:835)
	at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:455)
	at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
	at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:199)
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
	at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
	at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
	at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)

Solution 1:

The MySQL driver version was 5.1.38 while the local MySQL server is 8.0.26, so the cause is a driver/server version mismatch. Upgrade the MySQL driver and update the connection settings accordingly:

driver-class-name: com.mysql.cj.jdbc.Driver

Solution 2:

If solution 1 does not resolve the problem, add the following to server.yaml:

overwrite: true
proxy-mysql-default-version: 8.0.26
allow.range.query.with.inline.sharding: true

(screenshot)

Problem 3

Symptom:

The following error appears in the ShardingSphere-Proxy log:

com.mysql.cj.jdbc.MysqlXAException: XAER_RMERR: Fatal error occurred in the transaction branch - check your data for consistency
	at com.mysql.cj.jdbc.MysqlXAConnection.mapXAExceptionFromSQLException(MysqlXAConnection.java:344)
	at com.mysql.cj.jdbc.MysqlXAConnection.recover(MysqlXAConnection.java:189)
	at com.mysql.cj.jdbc.MysqlXAConnection.recover(MysqlXAConnection.java:117)
	at com.atomikos.datasource.xa.RecoveryScan.recoverXids(RecoveryScan.java:32)
	at com.atomikos.recovery.xa.XARecoveryManager.retrievePreparedXidsFromXaResource(XARecoveryManager.java:266)
	at com.atomikos.recovery.xa.XARecoveryManager.recover(XARecoveryManager.java:94)
	at com.atomikos.datasource.xa.XATransactionalResource.recover(XATransactionalResource.java:375)
	at com.atomikos.icatch.imp.RecoveryDomainService.performRecovery(RecoveryDomainService.java:81)
	at com.atomikos.icatch.imp.RecoveryDomainService$1.alarm(RecoveryDomainService.java:56)
	at com.atomikos.timing.PooledAlarmTimer.notifyListeners(PooledAlarmTimer.java:101)
	at com.atomikos.timing.PooledAlarmTimer.run(PooledAlarmTimer.java:88)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: XAER_RMERR: Fatal error occurred in the transaction branch - check your data for consistency
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
	at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1202)
	at com.mysql.cj.jdbc.MysqlXAConnection.recover(MysqlXAConnection.java:168)
	... 12 common frames omitted

Solution:

None yet; so far it has not been observed to affect usage.

Problem 4

The following error appears in the ShardingSphere-Proxy log:

[2023-04-12 10:54:35.432] [ShardingSphere-Command-31] [ERROR] o.a.s.proxy.frontend.command.CommandExecutorTask - Exception occur: 
org.apache.calcite.runtime.CalciteContextException: At line 0, column 0: Object 'PROFILING' not found within 'INFORMATION_SCHEMA'
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:505)
	at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:932)
	at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:917)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5362)
	at org.apache.calcite.sql.validate.IdentifierNamespace.resolveImpl(IdentifierNamespace.java:138)
	at org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:188)
	at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:89)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1107)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1078)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3381)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3360)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3697)
	at org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:64)
	at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:89)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1107)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1078)
	at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:248)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:1053)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:759)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:592)
	at org.apache.shardingsphere.sqlfederation.optimizer.SQLOptimizeEngine.optimize(SQLOptimizeEngine.java:50)
	at org.apache.shardingsphere.sqlfederation.advanced.AdvancedSQLFederationExecutor.execute(AdvancedSQLFederationExecutor.java:146)
	at org.apache.shardingsphere.sqlfederation.advanced.AdvancedSQLFederationExecutor.executeQuery(AdvancedSQLFederationExecutor.java:114)
	at org.apache.shardingsphere.proxy.backend.connector.DatabaseConnector.doExecuteFederation(DatabaseConnector.java:288)
	at org.apache.shardingsphere.proxy.backend.connector.DatabaseConnector.execute(DatabaseConnector.java:205)
	at org.apache.shardingsphere.proxy.frontend.mysql.command.query.text.query.MySQLComQueryPacketExecutor.execute(MySQLComQueryPacketExecutor.java:92)
	at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.doExecuteCommand(CommandExecutorTask.java:114)
	at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.executeCommand(CommandExecutorTask.java:109)
	at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.run(CommandExecutorTask.java:77)
	at com.alibaba.ttl.TtlRunnable.run(TtlRunnable.java:60)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Object 'PROFILING' not found within 'INFORMATION_SCHEMA'
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:505)
	at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:599)
	... 32 common frames omitted

Solution:

This error is raised when operating through the proxy from a Navicat connection; it does not occur when connecting from code or from the MySQL shell.

Problem 5

Exception in thread "main" java.util.concurrent.ExecutionException: org.apache.shardingsphere.dbdiscovery.mysql.exception.mgr.InvalidMGRReplicationGroupMemberException: `jdbc:mysql://主机1_IP:3306/demo_primary_ds?serverTimezone=UTC&useSSL=false` is not in MGR replication group member in database `database_discovery_db`.
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at org.apache.shardingsphere.dbdiscovery.mysql.type.MGRMySQLDatabaseDiscoveryProvider.checkEnvironment(MGRMySQLDatabaseDiscoveryProvider.java:82)
	at org.apache.shardingsphere.dbdiscovery.algorithm.DatabaseDiscoveryEngine.checkEnvironment(DatabaseDiscoveryEngine.java:58)
	at org.apache.shardingsphere.dbdiscovery.rule.DatabaseDiscoveryRule.findPrimaryReplicaRelationship(DatabaseDiscoveryRule.java:117)
	at org.apache.shardingsphere.dbdiscovery.rule.DatabaseDiscoveryRule.<init>(DatabaseDiscoveryRule.java:89)
	at org.apache.shardingsphere.dbdiscovery.rule.builder.DatabaseDiscoveryRuleBuilder.build(DatabaseDiscoveryRuleBuilder.java:47)
	at org.apache.shardingsphere.dbdiscovery.rule.builder.DatabaseDiscoveryRuleBuilder.build(DatabaseDiscoveryRuleBuilder.java:36)
	at org.apache.shardingsphere.infra.rule.builder.database.DatabaseRulesBuilder.build(DatabaseRulesBuilder.java:64)
	at org.apache.shardingsphere.infra.metadata.database.ShardingSphereDatabase.create(ShardingSphereDatabase.java:87)
	at org.apache.shardingsphere.metadata.factory.ExternalMetaDataFactory.createGenericDatabases(ExternalMetaDataFactory.java:79)
	at org.apache.shardingsphere.metadata.factory.ExternalMetaDataFactory.create(ExternalMetaDataFactory.java:67)
	at org.apache.shardingsphere.mode.metadata.MetaDataContextsFactory.create(MetaDataContextsFactory.java:91)
	at org.apache.shardingsphere.mode.metadata.MetaDataContextsFactory.create(MetaDataContextsFactory.java:68)
	at org.apache.shardingsphere.mode.manager.standalone.StandaloneContextManagerBuilder.build(StandaloneContextManagerBuilder.java:53)
	at org.apache.shardingsphere.proxy.initializer.BootstrapInitializer.createContextManager(BootstrapInitializer.java:69)
	at org.apache.shardingsphere.proxy.initializer.BootstrapInitializer.init(BootstrapInitializer.java:59)
	at org.apache.shardingsphere.proxy.Bootstrap.main(Bootstrap.java:54)
Caused by: org.apache.shardingsphere.dbdiscovery.mysql.exception.mgr.InvalidMGRReplicationGroupMemberException: `jdbc:mysql://主机1_IP:3306/demo_primary_ds?serverTimezone=UTC&useSSL=false` is not in MGR replication group member in database `database_discovery_db`.
	at org.apache.shardingsphere.dbdiscovery.mysql.type.MGRMySQLDatabaseDiscoveryProvider.checkMemberInstanceURL(MGRMySQLDatabaseDiscoveryProvider.java:127)
	at org.apache.shardingsphere.dbdiscovery.mysql.type.MGRMySQLDatabaseDiscoveryProvider.checkDataSourceEnvironment(MGRMySQLDatabaseDiscoveryProvider.java:93)
	at org.apache.shardingsphere.dbdiscovery.mysql.type.MGRMySQLDatabaseDiscoveryProvider.lambda$checkEnvironment$0(MGRMySQLDatabaseDiscoveryProvider.java:79)
	at com.alibaba.ttl.TtlCallable.call(TtlCallable.java:65)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Reference: https://github.com/apache/shardingsphere/issues/22251

