I. Lab topology:
(Diagram: web1 172.16.2.12 and web2 172.16.2.14 form the cluster; NFS server 172.16.2.13; NTP server 172.16.2.15; VIP 172.16.2.10.)
II. Preparing the lab environment:
1) Time synchronization (172.16.2.15 is the time server)
[root@web1 ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate 172.16.2.15
[root@web2 ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate 172.16.2.15
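Before trusting the cron job, it is worth confirming that the time server answers; a query-only sketch that does not change the clock:
[root@web1 ~]# /usr/sbin/ntpdate -q 172.16.2.15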
2) Make sure the web servers can reach each other by hostname (edit the hosts file)
[root@web1 ~]# vim /etc/hosts
172.16.2.12 web1.linux.com web1
172.16.2.14 web2.linux.com web2
[root@web2 ~]# vim /etc/hosts
172.16.2.12 web1.linux.com web1
172.16.2.14 web2.linux.com web2
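A quick sanity check that hostname resolution now works in both directions; a sketch:
[root@web1 ~]# ping -c 2 web2
[root@web2 ~]# ping -c 2 web1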
3) Make sure the nodes can log in to each other with SSH keys
[root@web1 ~]# ssh-keygen -P ''
[root@web1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@172.16.2.14
[root@web2 ~]# ssh-keygen -P ''
[root@web2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@172.16.2.12
4) Test:
[root@web1 ~]# date; ssh web2 'date'
Mon Jun 29 11:09:12 CST 2015
Mon Jun 29 11:09:12 CST 2015
[root@web2 ~]# date; ssh web1 'date'
Mon Jun 29 11:09:42 CST 2015
Mon Jun 29 11:09:42 CST 2015
III. Installing and configuring corosync and pacemaker
Install corosync
[root@web1 ~]# yum -y install corosync
[root@web2 ~]# yum -y install corosync
Install pacemaker
[root@web1 ~]# yum -y install pacemaker
[root@web2 ~]# yum -y install pacemaker
Configure corosync
[root@web1 ~]# cd /etc/corosync                               \\ switch to corosync's configuration directory
[root@web1 corosync]# cp corosync.conf.example corosync.conf  \\ provide the corosync configuration file
[root@web1 corosync]# vim corosync.conf
compatibility: whitetank                    \\ stay compatible with older corosync versions
totem {                                     \\ how heartbeat messages are exchanged between cluster nodes
    version: 2                              \\ config format version; keep the default
    secauth: on                             \\ enable the authentication key mechanism
    threads: 0                              \\ number of threads used to pass heartbeat messages
    interface {
        ringnumber: 0                       \\ ring number
        bindnetaddr: 172.16.2.0             \\ network address to bind to; a network address, not a specific IP
        mcastaddr: 237.225.10.1             \\ multicast address
        mcastport: 5405                     \\ listening port
        ttl: 1                              \\ keep messages inside the current network only
    }
}
logging {
    fileline: off                           \\ keep the default
    to_stderr: no                           \\ whether to send errors to the terminal; no disables it, keep the default
    to_logfile: yes                         \\ log to a file
    logfile: /var/log/cluster/corosync.log  \\ path of the log file
    #to_syslog: yes                         \\ whether to also log via rsyslog; corosync's own log file is used here, so this stays commented out
    debug: off                              \\ debug logging; enable only while troubleshooting
    timestamp: on                           \\ timestamp each entry; costs some I/O, enable as needed
    logger_subsys {
        subsys: AMF                         \\ keep the default
        debug: off                          \\ keep the default
    }
}
service {                                   \\ run pacemaker as a corosync plugin
    ver: 0                                  \\ pacemaker plugin version
    name: pacemaker                         \\ plugin name
}
aisexec {                                   \\ user and group to run as
    user: root
    group: root
}
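Since bindnetaddr must be the network address rather than a node's own IP, it can be derived from any node address plus the netmask; a sketch assuming the ipcalc utility shipped with CentOS 6:
[root@web1 corosync]# ipcalc -n 172.16.2.12 255.255.255.0
NETWORK=172.16.2.0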
Generate the authentication key
[root@web1 corosync]# corosync-keygen    \\ random input is needed here to generate the key
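corosync-keygen blocks until the kernel has gathered enough randomness, so on an idle machine it helps to type on the console or generate some disk activity; the available entropy can be watched with (a sketch):
[root@web1 corosync]# cat /proc/sys/kernel/random/entropy_avail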
Copy the same configuration file and key to the web2 server
[root@web1 corosync]# scp -p authkey corosync.conf web2:/etc/corosync
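corosync refuses a key file readable by anyone but root; scp -p preserves the 0400 mode set by corosync-keygen, which can be double-checked on web2 (a sketch):
[root@web2 ~]# ls -l /etc/corosync/authkey    \\ mode should be -r-------- (0400)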
Start corosync
[root@web1 corosync]# service corosync start
[root@web2 ~]# service corosync start
Check the startup logs to make sure corosync started correctly
[root@web1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Jun 29 11:52:06 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Jun 29 11:52:06 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
[root@web1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log
Jun 29 11:52:06 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Jun 29 11:52:06 corosync [pcmk  ] Logging: Initialized pcmk_startup
Jun 29 11:52:06 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jun 29 11:52:06 corosync [pcmk  ] info: pcmk_startup: Service: 9
Jun 29 11:52:06 corosync [pcmk  ] info: pcmk_startup: Local hostname: web1
Because this lab has no STONITH device, the following errors can be ignored:
[root@web1 corosync]# grep NOTE /var/log/cluster/corosync.log
Jun 29 11:52:29 [1765] web1 pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Jun 29 11:52:29 [1765] web1 pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Jun 29 11:52:52 [1765] web1 pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Jun 29 11:52:53 [1765] web1 pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Jun 29 11:52:53 [1765] web1 pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
These ERROR messages can likewise be ignored:
[root@web1 corosync]# grep ERROR /var/log/cluster/corosync.log
Jun 29 11:52:06 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Jun 29 11:52:06 corosync [pcmk  ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
Jun 29 11:52:29 [1765] web1 pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jun 29 11:52:29 [1765] web1 pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jun 29 11:52:52 [1765] web1 pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jun 29 11:52:53 [1765] web1 pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jun 29 11:52:53 [1765] web1 pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jun 29 11:53:18 [1766] web1 crmd: error: do_log: FSA: Input I_ERROR from crmd_node_update_complete() received in state S_IDLE
Jun 29 11:53:18 [1766] web1 crmd: notice: do_state_transition: State transition S_IDLE -> S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=crmd_node_update_complete ]
Jun 29 11:53:19 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process crmd exited (pid=1766, rc=201)
This output only reports that no STONITH device is defined and can be ignored (stonith is disabled in the cluster properties below):
[root@web1 corosync]# crm_verify -L -V
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
Check the listening ports
[root@web1 corosync]# ss -upln
State   Recv-Q Send-Q   Local Address:Port   Peer Address:Port
UNCONN  0      0        *:647                *:*      users:(("portreserve",934,6))
UNCONN  0      0        172.16.2.12:5404     *:*      users:(("corosync",1756,13))
UNCONN  0      0        172.16.2.12:5405     *:*      users:(("corosync",1756,14))
UNCONN  0      0        237.255.10.1:5405    *:*      users:(("corosync",1756,10))
Install crmsh
Add the yum repository:
[root@web1 corosync]# vim /etc/yum.repos.d/crmsh.repo
[suse_crmsh]
name=crmsh
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
[root@web1 corosync]# yum -y install crmsh
[root@web1 corosync]# scp /etc/yum.repos.d/crmsh.repo web2:/etc/yum.repos.d/
[root@web2 ~]# yum -y install crmsh
IV. Configuring the highly available HTTP service
Configure the shared file system (NFS)
[root@nfs ~]# vim /etc/exports
/www 172.16.2.0/24(rw,no_root_squash)
[root@nfs ~]# mkdir /www
[root@nfs ~]# echo "NFS" > /www/index.html
[root@nfs ~]# service nfs start
Mount test on web1:
[root@web1 ~]# mount -t nfs 172.16.2.13:/www /mnt
[root@web1 ~]# cat /mnt/index.html
NFS
[root@web1 ~]# umount /mnt
Mount test on web2:
[root@web2 ~]# mount -t nfs 172.16.2.13:/www /mnt
[root@web2 ~]# cat /mnt/index.html
NFS
[root@web2 ~]# umount /mnt
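Besides the mount test above, the export list can be queried directly from the web nodes; a sketch using showmount from nfs-utils:
[root@web1 ~]# showmount -e 172.16.2.13
Export list for 172.16.2.13:
/www 172.16.2.0/24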
Set the global cluster properties
[root@web1 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false              \\ disable stonith; this lab has no fencing device
crm(live)configure# verify
crm(live)configure# property no-quorum-policy=ignore            \\ on a two-node cluster, keep resources running when quorum is lost
crm(live)configure# verify
crm(live)configure# property default-resource-stickiness=200    \\ how strongly resources stick to their current node
crm(live)configure# verify
crm(live)configure# commit
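The committed values can be confirmed from the same shell; a quick sketch:
crm(live)configure# show    \\ the three property lines should appear under cib-bootstrap-options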
Configure the VIP resource:
crm(live)configure# primitive haweb_vip ocf:heartbeat:IPaddr params ip="172.16.2.10" nic="eth0" cidr_netmask="24" broadcast="172.16.2.255" op monitor interval=10s timeout=20s
crm(live)configure# verify
crm(live)configure# commit
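Once committed, the address should be live on the node running the resource; a verification sketch (assuming the resource started on web1, as the status output below shows):
[root@web1 ~]# ip addr show eth0 | grep 172.16.2.10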
Configure the NFS resource:
crm(live)configure# primitive haweb_nfs ocf:heartbeat:Filesystem params device="172.16.2.13:/www" directory="/var/www/html/" fstype="nfs" op monitor timeout=40s interval=20s op start timeout=60s op stop timeout=60s
crm(live)configure# verify
crm(live)configure# commit
Configure the httpd resource:
crm(live)configure# primitive haweb_http lsb:httpd op monitor timeout=15s interval=15s
crm(live)configure# verify
crm(live)configure# commit
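One caveat: an LSB resource must be left entirely to the cluster, so httpd should be stopped and removed from the boot sequence on both nodes beforehand; a sketch:
[root@web1 ~]# service httpd stop; chkconfig httpd off
[root@web2 ~]# service httpd stop; chkconfig httpd off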
Define ordering constraints
crm(live)configure# order haweb_nfs_after_haweb_vip inf: haweb_vip haweb_nfs
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# order haweb_http_after_haweb_nfs inf: haweb_nfs haweb_http
crm(live)configure# verify
crm(live)configure# commit
Group the resources
crm(live)configure# group hawww haweb_vip haweb_nfs haweb_http
crm(live)configure# verify
crm(live)configure# commit
Check the running state
crm(live)# status
Last updated: Mon Jun 29 14:24:34 2015
Last change: Mon Jun 29 14:23:52 2015
Stack: classic openais (with plugin)
Current DC: web2 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ web1 web2 ]

 Resource Group: hawww
     haweb_vip  (ocf::heartbeat:IPaddr):     Started web1
     haweb_nfs  (ocf::heartbeat:Filesystem): Started web1
     haweb_http (lsb:httpd):                 Started web1
V. Testing access
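Accessing the VIP from any client on the 172.16.2.0/24 network should return the page served from the NFS export; a command-line sketch:
[root@nfs ~]# curl http://172.16.2.10
NFS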
Fail over to the other node and access the site again:
crm(live)# node standby
crm(live)# status
Last updated: Mon Jun 29 14:28:00 2015
Last change: Mon Jun 29 14:27:37 2015
Stack: classic openais (with plugin)
Current DC: web2 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Node web1: standby
Online: [ web2 ]

 Resource Group: hawww
     haweb_vip  (ocf::heartbeat:IPaddr):     Started web2
     haweb_nfs  (ocf::heartbeat:Filesystem): Started web2
     haweb_http (lsb:httpd):                 Started web2
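After the failover test, web1 can be brought back into the cluster from the same shell; a sketch (with default-resource-stickiness=200 the hawww group is expected to stay on web2 rather than fail back):
crm(live)# node online web1
crm(live)# status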
Original article by 馬行空. If you reproduce it, please credit the source: http://www.www58058.com/5764