Deploying a Redis Cluster with Docker (3 Masters + 3 Slaves, Plus Scaling Out and In)

1: Create six Redis containers

docker run -d --name redis01 --net host --privileged=true -v /opt/redis/redis01:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381
docker run -d --name redis02 --net host --privileged=true -v /opt/redis/redis02:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382
docker run -d --name redis03 --net host --privileged=true -v /opt/redis/redis03:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383
docker run -d --name redis04 --net host --privileged=true -v /opt/redis/redis04:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384
docker run -d --name redis05 --net host --privileged=true -v /opt/redis/redis05:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385
docker run -d --name redis06 --net host --privileged=true -v /opt/redis/redis06:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386
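
The six commands above differ only in the container name, data directory, and port, so they can also be generated with a small shell loop. This is just a sketch equivalent to the commands above; adjust the image tag and host paths to your environment.

# Sketch: create redis01..redis06 in one loop (same flags as the commands above)
for i in $(seq 1 6); do
  docker run -d --name redis0${i} --net host --privileged=true \
    -v /opt/redis/redis0${i}:/data redis:6.0.8 \
    --cluster-enabled yes --appendonly yes --port 638${i}
done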

2: Check that the containers are running

[root@localhost redis]# docker ps 
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS     NAMES
2230e0a5bf5c   redis:6.0.8   "docker-entrypoint.s…"   7 seconds ago    Up 6 seconds              redis06
0bc9f5da8601   redis:6.0.8   "docker-entrypoint.s…"   9 seconds ago    Up 8 seconds              redis05
e1431fb85072   redis:6.0.8   "docker-entrypoint.s…"   9 seconds ago    Up 8 seconds              redis04
01c2ff5e0090   redis:6.0.8   "docker-entrypoint.s…"   9 seconds ago    Up 8 seconds              redis03
88892f9eb9db   redis:6.0.8   "docker-entrypoint.s…"   9 seconds ago    Up 9 seconds              redis02
a13bfc991867   redis:6.0.8   "docker-entrypoint.s…"   44 seconds ago   Up 43 seconds             redis01
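
If you only want to confirm that the six Redis containers are up, docker's filter and format options give a narrower view (a sketch, not part of the original steps):

# Sketch: show only the redis containers and their status
docker ps --filter "name=redis" --format "table {{.Names}}\t{{.Status}}"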

3: Create the 3-master / 3-slave Redis cluster

[root@localhost ~]# docker exec -it redis01 /bin/bash
--cluster-replicas 1 means one slave is created for every master:
root@localhost:/data# redis-cli --cluster create 192.168.1.31:6381 192.168.1.31:6382 192.168.1.31:6383 192.168.1.31:6384 192.168.1.31:6385 192.168.1.31:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.31:6385 to 192.168.1.31:6381
Adding replica 192.168.1.31:6386 to 192.168.1.31:6382
Adding replica 192.168.1.31:6384 to 192.168.1.31:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-5460] (5461 slots) master
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[5461-10922] (5462 slots) master
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[10923-16383] (5461 slots) master
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

4: Check the cluster state

root@localhost:/data# redis-cli -p 6381
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:300
cluster_stats_messages_pong_sent:310
cluster_stats_messages_sent:610
cluster_stats_messages_ping_received:305
cluster_stats_messages_pong_received:300
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:610
127.0.0.1:6381> cluster nodes
cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381@16381 myself,master - 0 1700124241000 1 connected 0-5460
eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384@16384 slave 89228357317c6b7d6850fffa2f0819085def1a2f 0 1700124242176 2 connected
89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382@16382 master - 0 1700124242000 2 connected 5461-10922
5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386@16386 slave cd9a9149593770a920258bf75e1235ca4b904cd5 0 1700124240000 1 connected
d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385@16385 slave c2436c65625e1d74d8ea5bde328df04699d494e9 0 1700124239000 3 connected
c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383@16383 master - 0 1700124241149 3 connected 10923-16383
From the output, the master -> slave pairs are:
192.168.1.31:6381 -> 192.168.1.31:6386
192.168.1.31:6382 -> 192.168.1.31:6384
192.168.1.31:6383 -> 192.168.1.31:6385
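
The same pairing can be pulled straight out of the node table: in the output of cluster nodes, the fourth field of every slave line is the node ID of the master it replicates. A quick sketch:

# Sketch: list only the replica entries; field 4 is the master's node ID
docker exec redis01 redis-cli -p 6381 cluster nodes | grep slave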

5: Storing data requires connecting in cluster mode (-c), not to a single node

Connected to a single node without -c, a write fails with MOVED whenever the key's slot lives on another node:
root@localhost:/data# redis-cli -p 6381
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.1.31:6383
127.0.0.1:6381> set k2 v2
OK
127.0.0.1:6381> set k3 v3
OK
127.0.0.1:6381> set k4 v4
(error) MOVED 8455 192.168.1.31:6382
Reconnecting with -c, redis-cli follows the redirects automatically:
root@localhost:/data# redis-cli -p 6381 -c
127.0.0.1:6381> set k1 v-cluster1
-> Redirected to slot [12706] located at 192.168.1.31:6383
OK
192.168.1.31:6383> set k2 v-cluster2
-> Redirected to slot [449] located at 192.168.1.31:6381
OK
192.168.1.31:6381> set k3 v3
OK
192.168.1.31:6381> set k4 v4
-> Redirected to slot [8455] located at 192.168.1.31:6382
OK
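
The MOVED errors and redirections are driven entirely by which of the 16384 hash slots each key maps to, and CLUSTER KEYSLOT lets you check that mapping yourself. A small sketch; the slot numbers are the ones shown in the redirects above:

# Sketch: show which hash slot each key maps to
docker exec redis01 redis-cli -p 6381 cluster keyslot k1   # 12706, served by 6383
docker exec redis01 redis-cli -p 6381 cluster keyslot k2   # 449, served by 6381
docker exec redis01 redis-cli -p 6381 cluster keyslot k4   # 8455, served by 6382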

6: Master-slave fault tolerance and failover
6.1: Check cluster information

root@localhost:/data# redis-cli --cluster check 192.168.1.31:6381
192.168.1.31:6381 (cd9a9149...) -> 2 keys | 5461 slots | 1 slaves.
192.168.1.31:6382 (89228357...) -> 1 keys | 5462 slots | 1 slaves.
192.168.1.31:6383 (c2436c65...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

6.2: Stop 6381 (redis01)

[root@localhost redis]# docker stop redis01

6.3: Check the cluster state

[root@localhost redis]# docker exec -it redis02 /bin/bash
root@localhost:/data# redis-cli -p 6382 -c
Check the cluster state:
127.0.0.1:6382> cluster nodes
eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384@16384 slave 89228357317c6b7d6850fffa2f0819085def1a2f 0 1700133223366 2 connected
5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386@16386 master - 0 1700133224000 7 connected 0-5460
d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385@16385 slave c2436c65625e1d74d8ea5bde328df04699d494e9 0 1700133224388 3 connected
89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382@16382 myself,master - 0 1700133223000 2 connected 5461-10922
cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381@16381 master,fail - 1700133123340 1700133116000 1 disconnected
c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383@16383 master - 0 1700133225412 3 connected 10923-16383
192.168.1.31:6386 has been promoted to master, and 192.168.1.31:6381 is marked as down.
# Data can still be queried normally
127.0.0.1:6382> get k1
-> Redirected to slot [12706] located at 192.168.1.31:6383
"v-cluster1"
192.168.1.31:6383> get k2
-> Redirected to slot [449] located at 192.168.1.31:6386
"v-cluster2"
192.168.1.31:6386> get k3
"v3"
192.168.1.31:6386> get k4
-> Redirected to slot [8455] located at 192.168.1.31:6382
"v4"

6.4: Start redis01

[root@localhost redis]# docker start redis01

6.5: Check the cluster again

[root@localhost redis]# docker exec -it redis02 /bin/bash
root@localhost:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes
eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384@16384 slave 89228357317c6b7d6850fffa2f0819085def1a2f 0 1700134213000 2 connected
5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386@16386 master - 0 1700134213004 7 connected 0-5460
d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385@16385 slave c2436c65625e1d74d8ea5bde328df04699d494e9 0 1700134214020 3 connected
89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382@16382 myself,master - 0 1700134211000 2 connected 5461-10922
cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381@16381 slave 5b40728c470dac59556f7b51866e590e9038bbd9 0 1700134211000 7 connected
c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383@16383 master - 0 1700134212000 3 connected 10923-16383
6381 has rejoined as a slave, while 6386 remains the master.

6.6: To restore the original roles (6381 as master, 6386 as slave), simply restart redis06

docker stop redis06
docker start redis06
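
Restarting redis06 simply forces another failover so the roles flip back. If you prefer not to take 6386 down at all, a manual failover achieves the same result: CLUSTER FAILOVER is issued on the replica you want promoted (here 6381) and the pair swaps roles gracefully. This is an alternative sketch, not the step the article actually used:

# Sketch: promote 6381 back to master without stopping 6386
docker exec redis01 redis-cli -p 6381 cluster failover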

6.7: Check the cluster state

root@localhost:/data# redis-cli --cluster check 192.168.1.31:6381
192.168.1.31:6381 (cd9a9149...) -> 2 keys | 5461 slots | 1 slaves.
192.168.1.31:6383 (c2436c65...) -> 1 keys | 5461 slots | 1 slaves.
192.168.1.31:6382 (89228357...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

7: Scale out: add node 6387 as a new master and node 6388 as its slave

docker run -d --name redis07 --net host --privileged=true -v /opt/redis/redis07:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
docker run -d --name redis08 --net host --privileged=true -v /opt/redis/redis08:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388

[root@localhost redis]# docker exec -it redis07 /bin/bash
Add the new node 6387 to the cluster as a master:
6387 is the new node that will join as a master;
6381 is an existing member of the cluster, used here as the entry point.
root@localhost:/data# redis-cli --cluster add-node 192.168.1.31:6387 192.168.1.31:6381
>>> Adding node 192.168.1.31:6387 to cluster 192.168.1.31:6381
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.31:6387 to make it join the cluster.
[OK] New node added correctly.
root@localhost:/data# redis-cli --cluster check 192.168.1.31:6381
192.168.1.31:6381 (cd9a9149...) -> 2 keys | 5461 slots | 1 slaves.
192.168.1.31:6387 (f32e0d73...) -> 0 keys | 0 slots | 0 slaves.
192.168.1.31:6383 (c2436c65...) -> 1 keys | 5461 slots | 1 slaves.
192.168.1.31:6382 (89228357...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: f32e0d7320635a5873beb3594927ed6eea318976 192.168.1.31:6387
   slots: (0 slots) master
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Reassign the hash slots:
root@localhost:/data# redis-cli --cluster reshard 192.168.1.31:6381
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
M: f32e0d7320635a5873beb3594927ed6eea318976 192.168.1.31:6387
   slots: (0 slots) master
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096 
What is the receiving node ID? f32e0d7320635a5873beb3594927ed6eea318976   # node ID of 6387
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all   # take slots from the other three masters
root@localhost:/data# redis-cli --cluster check 192.168.1.31:6381
192.168.1.31:6381 (cd9a9149...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6387 (f32e0d73...) -> 1 keys | 4096 slots | 0 slaves.
192.168.1.31:6383 (c2436c65...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6382 (89228357...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: f32e0d7320635a5873beb3594927ed6eea318976 192.168.1.31:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Notice that 6387 holds three new, non-contiguous ranges, while the existing masters keep contiguous ranges.
Moving everything around again would be too costly, so each of the three existing masters (6381, 6382, 6383) simply hands over part of its range (roughly 1365 slots apiece) to the new node 6387.
Next, add 6388 as the slave of 6387; f32e0d7320635a5873beb3594927ed6eea318976 is 6387's node ID:
root@localhost:/data# redis-cli --cluster add-node 192.168.1.31:6388 192.168.1.31:6387 --cluster-slave --cluster-master-id f32e0d7320635a5873beb3594927ed6eea318976
>>> Adding node 192.168.1.31:6388 to cluster 192.168.1.31:6387
>>> Performing Cluster Check (using node 192.168.1.31:6387)
M: f32e0d7320635a5873beb3594927ed6eea318976 192.168.1.31:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.31:6388 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 192.168.1.31:6387.
[OK] New node added correctly.
root@localhost:/data# redis-cli --cluster check 192.168.1.31:6381
192.168.1.31:6381 (cd9a9149...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6387 (f32e0d73...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6383 (c2436c65...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6382 (89228357...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: f32e0d7320635a5873beb3594927ed6eea318976 192.168.1.31:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: a8fd323608979efd31be1222d281db20b250820b 192.168.1.31:6388
   slots: (0 slots) slave
   replicates f32e0d7320635a5873beb3594927ed6eea318976
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
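
The manual reshard above leaves 6387 with three non-contiguous ranges. If an even redistribution is all you need, redis-cli can compute and move the slots for you; with --cluster-use-empty-masters it also pulls slots onto a brand-new, empty master such as 6387. A sketch of that alternative, not what this walkthrough actually ran:

# Sketch: let redis-cli balance slots across all masters, including empty ones
docker exec redis01 redis-cli --cluster rebalance 192.168.1.31:6381 --cluster-use-empty-masters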

8: Scale in: remove 6387 and 6388 from the cluster and restore 3 masters and 3 slaves

1: First remove the slave node 6388
root@localhost:/data# redis-cli --cluster del-node 192.168.1.31:6388 a8fd323608979efd31be1222d281db20b250820b
>>> Removing node a8fd323608979efd31be1222d281db20b250820b from cluster 192.168.1.31:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
root@localhost:/data# 
root@localhost:/data# redis-cli --cluster check 192.168.1.31:6381                                            
192.168.1.31:6381 (cd9a9149...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6387 (f32e0d73...) -> 1 keys | 4096 slots | 0 slaves.
192.168.1.31:6383 (c2436c65...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6382 (89228357...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: f32e0d7320635a5873beb3594927ed6eea318976 192.168.1.31:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The check shows that 6388 has been removed, leaving seven Redis nodes.
2: Reassign the freed slots
Empty out 6387's slots and hand them back; in this example all of the freed slots are given to 6381.
root@localhost:/data# redis-cli --cluster reshard 192.168.1.31:6381
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: f32e0d7320635a5873beb3594927ed6eea318976 192.168.1.31:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096 
What is the receiving node ID? cd9a9149593770a920258bf75e1235ca4b904cd5   # node ID of 6381, which receives the freed slots
Source node #1: f32e0d7320635a5873beb3594927ed6eea318976                  # node ID of 6387, the node being removed
Source node #2: done
Moving slot 12284 from f32e0d7320635a5873beb3594927ed6eea318976
Moving slot 12285 from f32e0d7320635a5873beb3594927ed6eea318976
Moving slot 12286 from f32e0d7320635a5873beb3594927ed6eea318976
Moving slot 12287 from f32e0d7320635a5873beb3594927ed6eea318976
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 12284 from 192.168.1.31:6387 to 192.168.1.31:6381: 
Moving slot 12285 from 192.168.1.31:6387 to 192.168.1.31:6381: 
Moving slot 12286 from 192.168.1.31:6387 to 192.168.1.31:6381: 
Moving slot 12287 from 192.168.1.31:6387 to 192.168.1.31:6381:
root@localhost:/data# redis-cli --cluster check 192.168.1.31:6381
192.168.1.31:6381 (cd9a9149...) -> 2 keys | 8192 slots | 1 slaves.
192.168.1.31:6387 (f32e0d73...) -> 0 keys | 0 slots | 0 slaves.
192.168.1.31:6383 (c2436c65...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6382 (89228357...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
M: f32e0d7320635a5873beb3594927ed6eea318976 192.168.1.31:6387
   slots: (0 slots) master
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
All 4096 of 6387's slots have gone to 6381, which now holds 8192 slots.
3: Now delete 6387
root@localhost:/data# redis-cli --cluster del-node 192.168.1.31:6387 f32e0d7320635a5873beb3594927ed6eea318976
>>> Removing node f32e0d7320635a5873beb3594927ed6eea318976 from cluster 192.168.1.31:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
4: Verify the cluster is back to 3 masters and 3 slaves
root@localhost:/data# redis-cli --cluster check 192.168.1.31:6381                                            
192.168.1.31:6381 (cd9a9149...) -> 2 keys | 8192 slots | 1 slaves.
192.168.1.31:6383 (c2436c65...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.31:6382 (89228357...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.31:6381)
M: cd9a9149593770a920258bf75e1235ca4b904cd5 192.168.1.31:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
S: eac210d52a6c6ddeb9556ffa1820d13a89828264 192.168.1.31:6384
   slots: (0 slots) slave
   replicates 89228357317c6b7d6850fffa2f0819085def1a2f
S: d8f0436ada2c423bc07d8cba38461eb3bb00ca3a 192.168.1.31:6385
   slots: (0 slots) slave
   replicates c2436c65625e1d74d8ea5bde328df04699d494e9
S: 5b40728c470dac59556f7b51866e590e9038bbd9 192.168.1.31:6386
   slots: (0 slots) slave
   replicates cd9a9149593770a920258bf75e1235ca4b904cd5
M: c2436c65625e1d74d8ea5bde328df04699d494e9 192.168.1.31:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 89228357317c6b7d6850fffa2f0819085def1a2f 192.168.1.31:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
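
The interactive reshard prompts used in this section can also be skipped entirely: redis-cli accepts the source node, destination node, and slot count as flags, which is handy when scripting a scale-in. A sketch using the node IDs from this cluster:

# Sketch: move all 4096 slots from 6387 to 6381 non-interactively, then drop the node
docker exec redis01 redis-cli --cluster reshard 192.168.1.31:6381 \
  --cluster-from f32e0d7320635a5873beb3594927ed6eea318976 \
  --cluster-to cd9a9149593770a920258bf75e1235ca4b904cd5 \
  --cluster-slots 4096 --cluster-yes
docker exec redis01 redis-cli --cluster del-node 192.168.1.31:6387 f32e0d7320635a5873beb3594927ed6eea318976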
