Setting Up a Redis Cluster (Linux)

Note: if you build the cluster across two machines, none of the port numbers should repeat; for example, the first machine uses 7000, 7001, 7002, 7003, 7004, 7005 and the second uses 7010, 7011, 7012, 7013, 7014, 7015.
This guide builds a "pseudo cluster" on a single host, 192.168.80.109.

Edit the configuration files

Create six directories, 7000 through 7005, and create a redis.conf file in each of them (a loop sketch for generating all six is shown after the sample config below):
port 7000
# Bind to a specific interface if needed; left commented so Redis listens on all interfaces
#bind 172.28.37.29
daemonize yes
protected-mode no
requirepass OBxhpshrNpl27Etv
# Run this instance in cluster mode
cluster-enabled yes
# Cluster state file, created and maintained by Redis itself
cluster-config-file nodes_7000.conf
# Milliseconds after which an unreachable node is considered failed
cluster-node-timeout 8000
#appendonly yes
#appendfsync always
logfile "/opt/log/redis/redis_7000.log"
pidfile /var/run/redis_7000.pid
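
Since the six files differ only in the port number, they can be generated in one loop. A minimal sketch, assuming the cluster root directory is /opt/redis-cluster (the same path used by the start/stop scripts below) and the same password as above:

mkdir -p /opt/log/redis   # make sure the log directory exists
for port in 7000 7001 7002 7003 7004 7005; do
  mkdir -p /opt/redis-cluster/$port
  cat > /opt/redis-cluster/$port/redis.conf <<EOF
port $port
daemonize yes
protected-mode no
requirepass OBxhpshrNpl27Etv
cluster-enabled yes
cluster-config-file nodes_$port.conf
cluster-node-timeout 8000
logfile "/opt/log/redis/redis_$port.log"
pidfile /var/run/redis_$port.pid
EOF
done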

Install Ruby

Ruby and rubygems were only needed by the legacy redis-trib.rb cluster tool; the redis-cli --cluster command used below does not depend on them, so this step is optional.
yum install ruby
yum install rubygems

Start and stop

Start script
[root@im-server redis-cluster]# cat start-redis-cluster.sh
for((i=0;i<6;i++));
 do /usr/local/bin/redis-server /opt/redis-cluster/700$i/redis.conf;
done

Shutdown script

[root@im-server redis-cluster]# cat shutdown-redis.sh
IP=127.0.0.1
for((i=0;i<6;i++));
  do /usr/local/bin/redis-cli -c -h $IP -p 700$i -a OBxhpshrNpl27Etv shutdown;
done
Or shut the instances down one by one:
/usr/local/bin/redis-cli -c -h 127.0.0.1 -p 7000 -a OBxhpshrNpl27Etv shutdown
/usr/local/bin/redis-cli -c -h 127.0.0.1 -p 7001 -a OBxhpshrNpl27Etv shutdown
/usr/local/bin/redis-cli -c -h 127.0.0.1 -p 7002 -a OBxhpshrNpl27Etv shutdown
/usr/local/bin/redis-cli -c -h 127.0.0.1 -p 7003 -a OBxhpshrNpl27Etv shutdown
/usr/local/bin/redis-cli -c -h 127.0.0.1 -p 7004 -a OBxhpshrNpl27Etv shutdown
/usr/local/bin/redis-cli -c -h 127.0.0.1 -p 7005 -a OBxhpshrNpl27Etv shutdown

Create the cluster

[root@im-server redis-cluster]# /usr/local/bin/redis-cli --cluster create 192.168.80.109:7000 192.168.80.109:7001 192.168.80.109:7002 192.168.80.109:7003 192.168.80.109:7004 192.168.80.109:7005 --cluster-replicas 1  -a OBxhpshrNpl27Etv
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.80.109:7003 to 192.168.80.109:7000
Adding replica 192.168.80.109:7004 to 192.168.80.109:7001
Adding replica 192.168.80.109:7005 to 192.168.80.109:7002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: c1964f9ff55c36fe80369b1bb10f51254232993c 192.168.80.109:7000
   slots:[0-5460] (5461 slots) master
M: ab9b225bc80f1a16b4813fa5964b0bb4f8ad4a95 192.168.80.109:7001
   slots:[5461-10922] (5462 slots) master
M: 6c0d7129bf046ca2634c2792a87d8e87e6ccec88 192.168.80.109:7002
   slots:[10923-16383] (5461 slots) master
S: 795f0aa801e76257733d8277ba8a2426058223ac 192.168.80.109:7003
   replicates ab9b225bc80f1a16b4813fa5964b0bb4f8ad4a95
S: 00282c0cb55742f50257b29bed3a9b9b4d00f5d0 192.168.80.109:7004
   replicates 6c0d7129bf046ca2634c2792a87d8e87e6ccec88
S: 147e8ca370a9564ae7e8aa3131bfdb5d2ec546a9 192.168.80.109:7005
   replicates c1964f9ff55c36fe80369b1bb10f51254232993c
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 192.168.80.109:7000)
M: c1964f9ff55c36fe80369b1bb10f51254232993c 192.168.80.109:7000
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 795f0aa801e76257733d8277ba8a2426058223ac 192.168.80.109:7003
   slots: (0 slots) slave
   replicates ab9b225bc80f1a16b4813fa5964b0bb4f8ad4a95
M: 6c0d7129bf046ca2634c2792a87d8e87e6ccec88 192.168.80.109:7002
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: ab9b225bc80f1a16b4813fa5964b0bb4f8ad4a95 192.168.80.109:7001
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 00282c0cb55742f50257b29bed3a9b9b4d00f5d0 192.168.80.109:7004
   slots: (0 slots) slave
   replicates 6c0d7129bf046ca2634c2792a87d8e87e6ccec88
S: 147e8ca370a9564ae7e8aa3131bfdb5d2ec546a9 192.168.80.109:7005
   slots: (0 slots) slave
   replicates c1964f9ff55c36fe80369b1bb10f51254232993c
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Note: if the command fails with: [ERR] Node 192.168.80.109:7000 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
Fix:
Delete each node's local aof, rdb, and nodes_700*.conf files, then connect to each node and clear its database if necessary:
172.168.63.201:7001> flushdb   # clear the current database (optional)
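
A minimal cleanup sketch, assuming the instances were started from /opt/redis-cluster so that the rdb/aof dumps and nodes_700*.conf files end up there:

cd /opt/redis-cluster
for port in 7000 7001 7002 7003 7004 7005; do
  /usr/local/bin/redis-cli -p $port -a OBxhpshrNpl27Etv flushall        # drop any keys (optional if already empty)
  /usr/local/bin/redis-cli -p $port -a OBxhpshrNpl27Etv shutdown nosave # stop the instance
done
rm -f ./nodes_700*.conf ./*.rdb ./*.aof
./start-redis-cluster.sh   # restart the instances, then rerun the --cluster create command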

Test

[root@im-server redis-cluster]# ./start-redis-cluster.sh
[root@im-server redis-cluster]# ps -ef|grep redis
root     32454     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7000 [cluster]
root     32459     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7001 [cluster]
root     32464     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7002 [cluster]
root     32469     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7003 [cluster]
root     32474     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7004 [cluster]
root     32479     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7005 [cluster]
root     32489 25684  0 13:43 pts/1    00:00:00 grep --color=auto redis
[root@im-server redis-cluster]# /usr/local/bin/redis-cli -c -p 7000
127.0.0.1:7000> set name leo
-> Redirected to slot [5798] located at 127.0.0.1:7001
OK
127.0.0.1:7001> get name
"leo"
127.0.0.1:7001> del name
(integer) 1
127.0.0.1:7001> get name
(nil)
127.0.0.1:7001> exit
[root@im-server redis-cluster]# ps -ef|grep redis
root     32454     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7000 [cluster]
root     32459     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7001 [cluster]
root     32464     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7002 [cluster]
root     32469     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7003 [cluster]
root     32474     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7004 [cluster]
root     32479     1  0 13:43 ?        00:00:00 /usr/local/bin/redis-server *:7005 [cluster]
root     32552 25684  0 13:44 pts/1    00:00:00 grep --color=auto redis
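Before shutting the instances down, cluster health can also be checked with the built-in cluster commands, for example against the 7000 node (any of the six works):

/usr/local/bin/redis-cli -c -p 7000 -a OBxhpshrNpl27Etv cluster info    # expect cluster_state:ok and cluster_slots_assigned:16384
/usr/local/bin/redis-cli -c -p 7000 -a OBxhpshrNpl27Etv cluster nodes   # lists every master/replica with its slot ranges
/usr/local/bin/redis-cli --cluster check 192.168.80.109:7000 -a OBxhpshrNpl27Etv   # same consistency check run at the end of --cluster create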
[root@im-server redis-cluster]# ./shutdown-redis.sh
[root@im-server redis-cluster]# ps -ef|grep redis
root     32569 25684  0 13:45 pts/1    00:00:00 grep --color=auto redis