1. Lab topology diagram
2. Prepare the lab environment
1) Ensure the SQL servers can reach each other by hostname
[root@SQL1 ~]# vim /etc/hosts
172.16.2.13 SQL1.linux.com SQL1
172.16.2.14 SQL2.linux.com SQL2
[root@SQL2 ~]# vim /etc/hosts
172.16.2.13 SQL1.linux.com SQL1
172.16.2.14 SQL2.linux.com SQL2
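The name-to-address mapping can be sanity-checked with a quick awk one-liner. The sketch below runs against a temporary copy of the entries; on the real nodes you would point it at /etc/hosts itself:

```shell
# Sketch: look up the short-hostname column in a hosts file
# (temp copy of the two entries above; use /etc/hosts on the real nodes)
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
172.16.2.13 SQL1.linux.com SQL1
172.16.2.14 SQL2.linux.com SQL2
EOF
awk '$3 == "SQL1" {print $1}' "$hosts"   # → 172.16.2.13
rm -f "$hosts"
```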
2) Ensure time synchronization
[root@SQL1 ~]# crontab -e
*/2 * * * * /usr/sbin/ntpdate 172.16.2.15
[root@SQL2 ~]# crontab -e
*/2 * * * * /usr/sbin/ntpdate 172.16.2.15
3) Ensure the nodes can communicate over SSH with key-based authentication
[root@SQL1 ~]# ssh-keygen -P ''
[root@SQL1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@172.16.2.14
root@172.16.2.14's password:
[root@SQL2 ~]# ssh-keygen -P ''
[root@SQL2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@172.16.2.13
root@172.16.2.13's password:
4) Test
[root@SQL1 ~]# date; ssh SQL2 'date'
Wed Jul 1 10:59:40 CST 2015
Wed Jul 1 10:59:40 CST 2015
[root@SQL2 ~]# date; ssh SQL1 'date'
Wed Jul 1 11:00:32 CST 2015
Wed Jul 1 11:00:33 CST 2015
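Since ntpdate only runs every two minutes, a small skew like the one second above is expected. GNU date can convert the two timestamps to epoch seconds to measure it exactly; a small sketch using the outputs printed above:

```shell
# Sketch: compute the skew between the two timestamps printed by the test above
t1=$(date -d "Wed Jul 1 11:00:32 CST 2015" +%s)
t2=$(date -d "Wed Jul 1 11:00:33 CST 2015" +%s)
echo "skew: $((t2 - t1))s"   # → skew: 1s
```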
3. Install corosync and pacemaker
Install corosync
[root@SQL1 ~]# yum -y install corosync
[root@SQL2 ~]# yum -y install corosync
Install pacemaker
[root@SQL1 ~]# yum -y install pacemaker
[root@SQL2 ~]# yum -y install pacemaker
Configure corosync
[root@SQL1 ~]# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
[root@SQL1 ~]# vim /etc/corosync/corosync.conf
compatibility: whitetank        # stay compatible with older corosync versions
totem {                         # how heartbeat messages are exchanged
    version: 2                  # totem protocol version 2
    secauth: on                 # enable key-based authentication (off by default)
    threads: 0                  # number of threads used to send heartbeat messages
    interface {
        ringnumber: 0           # starting ring number
        bindnetaddr: 172.16.2.0 # address to bind to -- note this is a network address, not an IP address
        mcastaddr: 235.250.10.10  # multicast address
        mcastport: 5405         # multicast port
        ttl: 1
    }
}
logging {
    fileline: off               # default is fine
    to_stderr: no               # whether to send errors to the terminal; default no
    to_logfile: yes             # enable the log file
    logfile: /var/log/cluster/corosync.log  # log file location
    debug: off                  # whether to log debug messages
    timestamp: on               # timestamp each log entry (on by default; costs some I/O)
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
service {                       # run pacemaker as a corosync plugin
    ver: 0
    name: pacemaker
}
aisexec {                       # user and group the service runs as
    user: root
    group: root
}
Generate the authentication key
[root@SQL1 ~]# corosync-keygen    # this needs entropy; generating activity (e.g. downloading a file) helps produce random data faster
Copy the configuration file and key to SQL2
[root@SQL1 ~]# scp -p /etc/corosync/{authkey,corosync.conf} SQL2:/etc/corosync/
Start corosync
[root@SQL1 ~]# service corosync start
[root@SQL2 ~]# service corosync start
Check the logs to confirm corosync started correctly
[root@SQL1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Jul 01 11:04:26 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'
[root@SQL1 ~]# grep TOTEM /var/log/cluster/corosync.log
Jul 01 11:04:26 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Jul 01 11:04:26 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jul 01 11:04:26 corosync [TOTEM ] The network interface [172.16.2.13] is now up.
Jul 01 11:04:26 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 01 11:04:42 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
These ERROR log entries can be ignored:
[root@SQL1 ~]# grep ERROR /var/log/cluster/corosync.log
Jul 01 11:04:26 corosync [pcmk ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Jul 01 11:04:26 corosync [pcmk ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
Jul 01 11:04:50 [3996] SQL1.linux.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jul 01 11:04:50 [3996] SQL1.linux.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jul 01 11:04:50 [3996] SQL1.linux.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
[root@SQL1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Jul 01 11:04:26 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Jul 01 11:04:26 corosync [pcmk ] Logging: Initialized pcmk_startup
Jul 01 11:04:26 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jul 01 11:04:26 corosync [pcmk ] info: pcmk_startup: Service: 9
Jul 01 11:04:26 corosync [pcmk ] info: pcmk_startup: Local hostname: SQL1.linux.com
Install crmsh (configure the yum repo: http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/)
[root@SQL1 ~]# vim /etc/yum.repos.d/crmsh.repo
[crmsh]
name=crmsh
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
[root@SQL1 ~]# yum -y install crmsh
4. Install and configure iSCSI: prepare a disk on the iSCSI server ahead of time for iSCSI to use
Install the server side (172.16.2.12)
[root@iscsi ~]# yum -y install scsi-target-utils
[root@iscsi ~]# service tgtd start    # start the service
Starting SCSI target daemon: [ OK ]
Install the client side
[root@SQL1 ~]# yum -y install iscsi-initiator-utils
[root@SQL1 ~]# service iscsi start     # script used to discover iSCSI devices
[root@SQL1 ~]# service iscsid start    # iSCSI service startup script
[root@SQL2 ~]# yum -y install iscsi-initiator-utils
[root@SQL2 ~]# service iscsi start
[root@SQL2 ~]# service iscsid start
Server-side configuration:
There are two approaches:
1) Edit /etc/tgt/targets.conf; an iSCSI target defined in the config file survives a system reboot.
2) Use the tgtadm command-line tool; an iSCSI target configured this way is lost after a system reboot.
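For comparison, the first approach would express the same target as a persistent /etc/tgt/targets.conf entry, roughly like this (a sketch reusing the target name and backing device from this walkthrough; check your scsi-target-utils version for the exact directives supported):

```
# Persistent equivalent of the tgtadm commands in this section
<target iqn.2015-07.com.mylinux:t1>
    # backing device for LUN 1
    backing-store /dev/sdb1
    # same ACL as the tgtadm bind command
    initiator-address 172.16.2.0/24
</target>
```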
Here we use the tgtadm command-line tool:
[root@iscsi ~]# tgtadm -L iscsi -m target -o new -t 1 -T iqn.2015-07.com.mylinux:t1    # create the target; see tgtadm -h for help
[root@iscsi ~]# tgtadm -L iscsi -m target -o show    # show the target just created
Target 1: iqn.2015-07.com.mylinux:t1    # target name and target ID 1
    System information:                 # system information
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0                          # logical unit number; 0 is reserved by default
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null    # backing device type
            Backing store path: None    # location of the backing device
            Backing store flags:        # backing device flags
    Account information:                # users authorized to access the target
    ACL information:                    # IP ranges authorized to access the target
[root@iscsi ~]# tgtadm -L iscsi -m logicalunit -o new -t 1 -l 1 -b /dev/sdb1    # add a disk device to the target
[root@iscsi ~]# tgtadm -L iscsi -m target -o bind -t 1 -I 172.16.2.0/24        # authorize access to the target; by default no one is allowed
[root@iscsi ~]# tgtadm -L iscsi -m target -o show    # view the target information again
Target 1: iqn.2015-07.com.mylinux:t1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1                          # logical unit number
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 10742 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr    # type
            Backing store path: /dev/sdb1    # the device that was added
            Backing store flags:
    Account information:
    ACL information:
        172.16.2.0/24                   # the authorization just added
iSCSI client configuration
[root@SQL1 ~]# echo "InitiatorName=`iscsi-iname -p iqn.2015-07.com.sql1`" > /etc/iscsi/initiatorname.iscsi    # set a new initiator name
[root@SQL1 ~]# cat /etc/iscsi/initiatorname.iscsi    # check the iSCSI name
InitiatorName=iqn.2015-07.com.sql1:97b0de58129       # the generated name
[root@SQL2 ~]# echo "InitiatorName=`iscsi-iname -p iqn.2015-07.com.sql2`" > /etc/iscsi/initiatorname.iscsi
[root@SQL2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2015-07.com.sql2:313bbc508b59
Discover the iSCSI device from the clients
[root@SQL1 ~]# iscsiadm -m discovery -t st -p 172.16.2.12    # see the iscsiadm man page for the full command reference
Starting iscsid: [ OK ]
172.16.2.12:3260,1 iqn.2015-07.com.mylinux:t1    # the device exported over iSCSI has been found
[root@SQL2 ~]# iscsiadm -m discovery -t st -p 172.16.2.12
Starting iscsid: [ OK ]
172.16.2.12:3260,1 iqn.2015-07.com.mylinux:t1
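Each discovery line has the form "portal:port,tpgt target-iqn", which makes it easy to script the subsequent login. A sketch (the sample line is copied from the output above; the actual iscsiadm call is left commented out since it needs the live target):

```shell
# Parse one discovery line of the form "portal:port,tpgt target-iqn"
line="172.16.2.12:3260,1 iqn.2015-07.com.mylinux:t1"
portal=${line%%,*}   # strip ",tpgt target" -> 172.16.2.12:3260
target=${line#* }    # strip "portal:port,tpgt " -> the IQN
echo "portal=$portal target=$target"
# iscsiadm -m node -T "$target" -p "$portal" -l   # actual login (requires the live target)
```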
Log in to the discovered device
[root@SQL1 ~]# iscsiadm -m node -T iqn.2015-07.com.mylinux:t1 -p 172.16.2.12 -l
Logging in to [iface: default, target: iqn.2015-07.com.mylinux:t1, portal: 172.16.2.12,3260] (multiple)
Login to [iface: default, target: iqn.2015-07.com.mylinux:t1, portal: 172.16.2.12,3260] successful.
fdisk -l now shows an extra local disk:
[root@SQL1 ~]# fdisk -l | grep "/dev/sd[a-z]"
Disk /dev/sda: 42.9 GB, 42949672960 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 5222 41430016 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10742183424 bytes    # this is the block device exported by the iSCSI server
After partitioning and formatting the device:
[root@SQL1 ~]# fdisk -l | grep "/dev/sd[a-z][0-9]"
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 5222 41430016 8e Linux LVM
/dev/sdb1 1 10244 10489840 83 Linux    # the new partition
That's all for now; the rest will follow in the next installment.
Original article by 馬行空. If you repost it, please credit the source: http://www.www58058.com/5883
@tars: Thanks.
Has anyone run into this situation?
[root@lab2 ~]# crm
crm(live)# con
ERROR: running cibadmin -Ql: Could not establish cib_rw connection: Connection refused (111)
Signon to CIB failed: Transport endpoint is not connected
Init failed, could not perform requested operations