What is Pacemaker?
Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities provided by a cluster infrastructure layer (OpenAIS, Heartbeat, or Corosync) to detect node- or resource-level failures and recover from them, maximizing the availability of cluster services (also called resources).
Prerequisites:
1) This setup uses two test nodes, node1.a.org and node2.a.org, with the IP addresses 192.168.0.5 and 192.168.0.6 respectively;
2) node1 and node2 already form a cluster based on openais/corosync, and a Primary/Secondary DRBD device /dev/drbd0 with the resource name web has already been configured on both nodes. If your setup differs, adjust these values in the commands below so that they match your configuration;
3) The operating system is RHEL 5.4 on the x86 platform.
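Both nodes must also resolve each other's hostnames consistently. As a minimal sketch (an assumption based on the addresses above; use DNS instead if you prefer), /etc/hosts on both nodes would contain entries like:

192.168.0.5   node1.a.org   node1
192.168.0.6   node2.a.org   node2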
1. Check the current cluster configuration and make sure the global properties appropriate for a two-node cluster are in place:
# crm configure show
node node1.a.org
node node2.a.org
property $id="cib-bootstrap-options" \
    dc-version="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    last-lrm-refresh="1308059765" \
    no-quorum-policy="ignore"
In the output above, make sure stonith-enabled and no-quorum-policy are present with the values shown. If they are not, set them with the following commands:
# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore
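To confirm the two properties took effect, a quick check can be run against the configuration (just a sketch; the grep pattern is arbitrary):

# crm configure show | grep -E 'stonith-enabled|no-quorum-policy'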
2. Define the already configured DRBD device /dev/drbd0 as a cluster service.
1) As required for cluster-managed services, first make sure the drbd service is stopped on both nodes and will not start automatically at boot:
# drbd-overview
0:web Unconfigured . . . .

# chkconfig drbd off
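The same must be done on node2. As a rough sketch (assuming root can ssh from node1 to node2 without a password), the second node could be handled remotely:

# ssh node2.a.org 'service drbd stop; chkconfig drbd off'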
2) Configure drbd as a cluster resource:
The resource agent (RA) for drbd is provided under the OCF provider linbit, at /usr/lib/ocf/resource.d/linbit/drbd. The following commands display this RA and its meta information:
# crm ra classes
heartbeat
lsb
ocf / heartbeat linbit pacemaker
stonith

# crm ra list ocf linbit
drbd

# crm ra info ocf:linbit:drbd
This resource agent manages a DRBD resource
as a master/slave resource. DRBD is a shared-nothing replicated storage
device. (ocf:linbit:drbd)

Master/Slave OCF Resource Agent for DRBD

Parameters (* denotes required, [] the default):

drbd_resource* (string): drbd resource name
    The name of the drbd resource from the drbd.conf file.

drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf
    Full path to the drbd.conf file.

Operations' defaults (advisory minimum):

    start          timeout=240
    promote        timeout=90
    demote         timeout=90
    notify         timeout=90
    stop           timeout=100
    monitor_Slave  interval=20 timeout=20 start-delay=1m
    monitor_Master interval=10 timeout=20 start-delay=1m
drbd must run on both nodes at the same time, but only one node (in the primary/secondary model) can be the Master while the other is the Slave. It is therefore a somewhat special cluster resource: its type is a multi-state clone, meaning the nodes are differentiated into Master and Slave roles, and both nodes are required to start in the Slave state when the service is first started.
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# primitive webdrbd ocf:linbit:drbd params drbd_resource=web op monitor role=Master interval=50s timeout=30s op monitor role=Slave interval=60s timeout=30s
crm(live)configure# master MS_Webdrbd webdrbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# show webdrbd
primitive webdrbd ocf:linbit:drbd \
    params drbd_resource="web" \
    op monitor interval="15s"
crm(live)configure# show MS_Webdrbd
ms MS_Webdrbd webdrbd \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit
Check the current running state of the cluster:
# crm status
============
Last updated: Fri Jun 17 06:24:03 2011
Stack: openais
Current DC: node2.a.org - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
1 Resources configured.
============

Online: [ node2.a.org node1.a.org ]

Master/Slave Set: MS_Webdrbd
    Masters: [ node2.a.org ]
    Slaves: [ node1.a.org ]
The output above shows that the drbd Primary node is currently node2.a.org and the Secondary node is node1.a.org. You can also verify on node2 whether this host has become the Primary node for the web resource with the following command:
# drbdadm role web
Primary/Secondary
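Correspondingly, running the same command on node1 should report the roles from the other side (a sketch of the expected output, since drbdadm prints the local role before the peer role):

[root@node1 ~]# drbdadm role web
Secondary/Primary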
3) Create a cluster service that automatically mounts the web resource on the Primary node
The Master node of MS_Webdrbd is the Primary node of the drbd web resource; on that node the device /dev/drbd0 can be mounted, and for use by a cluster service it also needs to be mounted automatically. Assume the web resource here holds the shared filesystem that provides page files for a web server cluster and needs to be mounted at /www (this directory must already exist on both nodes).
In addition, this automatic-mount resource must run on the Master node of the drbd service and can only be started after drbd has promoted that node to Primary. We therefore also need a colocation constraint and an ordering constraint between the two resources; both are defined in the crm session further below.
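If the mount point does not exist yet, create it on both nodes first (a minimal sketch, assuming passwordless ssh from node1 to node2):

# mkdir -p /www
# ssh node2.a.org 'mkdir -p /www'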
# crm
crm(live)# configure
crm(live)configure# primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/www" fstype="ext3"
crm(live)configure# colocation WebFS_on_MS_webdrbd inf: WebFS MS_Webdrbd:Master
crm(live)configure# order WebFS_after_MS_Webdrbd inf: MS_Webdrbd:promote WebFS:start
crm(live)configure# verify
crm(live)configure# commit
Check the running state of the resources in the cluster:
# crm status
============
Last updated: Fri Jun 17 06:26:03 2011
Stack: openais
Current DC: node2.a.org - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
2 Resources configured.
============

Online: [ node2.a.org node1.a.org ]

Master/Slave Set: MS_Webdrbd
    Masters: [ node2.a.org ]
    Slaves: [ node1.a.org ]
WebFS   (ocf::heartbeat:Filesystem):    Started node2.a.org
The output shows that WebFS is running on the same node as the drbd Primary, node2.a.org. Let's copy a file into /www (the mount point) on node2, and after the failover check whether it also exists in /www on node1.
# cp /etc/rc.d/rc.sysinit /www
Next we simulate a failure of node2 and check whether these resources fail over to node1 correctly.
Run the following commands on node2:
# crm node standby
# crm status
============
Last updated: Fri Jun 17 06:27:03 2011
Stack: openais
Current DC: node2.a.org - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
2 Resources configured.
============

Node node2.a.org: standby
Online: [ node1.a.org ]

Master/Slave Set: MS_Webdrbd
    Masters: [ node1.a.org ]
    Stopped: [ webdrbd:0 ]
WebFS   (ocf::heartbeat:Filesystem):    Started node1.a.org
From this output we can see that node2 has gone into standby mode and its drbd instance has been stopped, while the failover has completed: all resources have moved to node1 as expected.
On node1 you can now see a copy of the data that was saved to /www while node2 was the Primary node.
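For example, the file copied earlier should now be listed on node1 (a quick sanity check; rc.sysinit is the file copied in the step above):

[root@node1 ~]# ls /www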
Bring node2 back online:
# crm node online
[root@node2 ~]# crm status
============
Last updated: Fri Jun 17 06:30:05 2011
Stack: openais
Current DC: node2.a.org - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
2 Resources configured.
============

Online: [ node2.a.org node1.a.org ]

Master/Slave Set: MS_Webdrbd
    Masters: [ node1.a.org ]
    Slaves: [ node2.a.org ]
WebFS   (ocf::heartbeat:Filesystem):    Started node1.a.org