1. Keepalived Multi-Master Model
Concept of the Keepalived multi-master model
As shown in the figure above, the keepalived master/backup architecture wastes a fair amount of capacity, because the backup node sits idle. If the business traffic can be split cleanly, a keepalived multi-master model can be configured instead, with the two keepalived nodes acting as master and backup for each other: for example, product service traffic goes through keepalived1 with keepalived2 as its backup, while order service traffic goes through keepalived2 with keepalived1 as its backup. In other words, keepalived1 and keepalived2 each hold one group of VIPs.
Keepalived multi-master model configuration
Order service on keepalived1:
vrrp_instance VI_2 {
    state BACKUP                  # this node is the backup for the order service
    interface ens33
    virtual_router_id 55
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111156
    }
    virtual_ipaddress {
        192.168.80.40/24
        192.168.80.41/24
        192.168.80.42/24
    }
    notify_master "/root/sendemail.sh master"
    notify_backup "/root/sendemail.sh backup"
    notify_fault "/root/sendemail.sh fault"
}
Product service on keepalived1:
vrrp_instance VI_1 {
    state MASTER                  # this node is the master for the product service
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.50/24
        192.168.80.51/24
        192.168.80.52/24
    }
    notify_master "/root/sendemail.sh master"
    notify_backup "/root/sendemail.sh backup"
    notify_fault "/root/sendemail.sh fault"
}
Order service on keepalived2:
vrrp_instance VI_2 {
    state MASTER                  # this node is the master for the order service
    interface ens33
    virtual_router_id 55
    priority 100                  # higher than the backup's 80 so this node wins the election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111156
    }
    virtual_ipaddress {
        192.168.80.40/24
        192.168.80.41/24
        192.168.80.42/24
    }
    notify_master "/root/sendemail.sh master"
    notify_backup "/root/sendemail.sh backup"
    notify_fault "/root/sendemail.sh fault"
}
Product service on keepalived2:
vrrp_instance VI_1 {
    state BACKUP                  # backup for the product service (keepalived1 is the master)
    interface ens33
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.50/24
        192.168.80.51/24
        192.168.80.52/24
    }
    notify_master "/root/sendemail.sh master"
    notify_backup "/root/sendemail.sh backup"
    notify_fault "/root/sendemail.sh fault"
}
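All four instances call /root/sendemail.sh on state transitions. The script itself is not shown here; a minimal hypothetical sketch, assuming a working local mail command and a placeholder recipient address, could look like this:

#!/bin/bash
# /root/sendemail.sh - hypothetical notification script (not part of the original text)
# keepalived invokes it with the new state as the first argument: master, backup or fault
STATE=$1
SUBJECT="keepalived on $(hostname) switched to ${STATE}"
# admin@example.com is a placeholder; replace it with a real recipient or your alerting tool
echo "$(date '+%F %T') ${SUBJECT}" | mail -s "${SUBJECT}" admin@example.com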
Verification
As shown in the figure above, keepalived1 is the master node for the product service and holds the product service VIPs, while keepalived2 is the master node for the order service and holds the order service VIPs; the two nodes back each other up.
If one keepalived node fails at this point, the other node takes over the VIPs of both services.
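Which node holds which VIPs can be checked directly on each machine; a quick verification sketch, assuming the interface name ens33 and the addresses from the configuration above:

# On keepalived1 and keepalived2: list the VIPs currently bound to ens33
ip addr show dev ens33 | grep 192.168.80

# Simulate a failure of keepalived1
systemctl stop keepalived

# On keepalived2, both VIP groups (192.168.80.40-42 and 192.168.80.50-52) should now appear
ip addr show dev ens33 | grep 192.168.80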
2. Implementing IPVS High Availability
In distributed system architecture, high availability is always a core concern. Using Keepalived to float a VIP only solves availability at the IP layer: this is like laying a solid foundation for a building, but for the building to actually function, a complete fault-tolerance mechanism is also needed at the service layer. LVS (Linux Virtual Server) is the key component for this, and its capabilities complement Keepalived's VRRP protocol.
Virtual server configuration
virtual_server IP port {              # define a virtual server listening on the given IP and port
    delay_loop <INT>                  # interval between health checks (seconds)
    lb_algo rr|wrr|lc|wlc|dh          # scheduling algorithm (rr=round robin, wrr=weighted round robin, lc=least connections, wlc=weighted least connections, dh=destination hashing)
    lb_kind NAT|DR|TUN                # packet forwarding mode (NAT=network address translation, DR=direct routing, TUN=tunneling)
    persistence_timeout <INT>         # session persistence time (seconds; 0 disables it)
    protocol TCP|UDP|SCTP             # protocol type
    sorry_server <IPADDR> <PORT>      # fallback server used when all real servers are down
    real_server <IPADDR> <PORT> {     # define a real server (backend node)
        weight <INT>                  # server weight (a higher weight receives more requests)
        notify_up <STRING>            # script/command run when the server comes up
        notify_down <STRING>          # script/command run when the server goes down
        # health check method (choose one):
        HTTP_GET|SSL_GET {            # HTTP/HTTPS check (configure the URL and expected status code)
            url { path <PATH> }       # e.g. url { path "/health" status_code 200 }
        }
        TCP_CHECK { ... }             # TCP port check (tests port connectivity)
        SMTP_CHECK { ... }            # SMTP protocol check (for mail servers)
        MISC_CHECK { ... }            # custom script check (specify an external script path)
    }
}
For example:
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100
    }
}

virtual_server 192.168.1.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    protocol TCP
    real_server 192.168.1.101 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
This configuration demonstrates:
- VRRP floating the VIP (192.168.1.100) between director nodes
- LVS distributing traffic in DR (direct routing) mode; DR mode also needs setup on each real server, as shown in the sketch after this list
- A health check mechanism based on TCP connections
- A weight of 3 on the real server, giving it a proportionally larger share of requests
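As noted in the list above, lb_kind DR requires each real server to accept traffic addressed to the VIP while leaving ARP resolution of the VIP to the director alone. A common real-server-side sketch (not part of the original configuration; it assumes the VIP 192.168.1.100 and the loopback interface lo):

# Run on every real server (e.g. 192.168.1.101) when using LVS-DR
ip addr add 192.168.1.100/32 dev lo       # hold the VIP locally so packets for it are accepted

# Suppress ARP for the VIP so only the director answers ARP requests
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce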
When a real server becomes unreachable, LVS automatically removes it from the service pool; at the same time, Keepalived maintains the availability of the VIP via the VRRP protocol, giving two layers of protection.
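To observe this behavior, the IPVS rule table that keepalived programs into the kernel can be inspected with ipvsadm (a verification sketch, assuming the example addresses above and that the ipvsadm tool is installed):

# Show the virtual server, its real servers, weights and connection counters
ipvsadm -Ln

# Stop or firewall off port 80 on 192.168.1.101; once the TCP_CHECK retries are
# exhausted, the real server disappears from the table, which can be watched with:
watch -n 2 'ipvsadm -Ln'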