In this section:
1. Introduction to keepalived
2. keepalived service configuration
3. Lab: a keepalived master/backup setup
4. Lab: a keepalived dual-master setup
5. Lab: a highly available LVS-DR cluster with keepalived
6. Lab: a dual-master LVS-DR high-availability cluster
7. Lab: Nginx high availability with keepalived
8. Lab: HAProxy high availability with keepalived
1. Introduction to keepalived
Keepalived is a high-availability tool based on the VRRP protocol. It monitors server state: if a web server goes down or otherwise fails, keepalived detects the failure, removes the faulty server from the pool, and lets the remaining servers take over its work. Once the server is healthy again, keepalived automatically adds it back to the pool. All of this happens without manual intervention; the only manual task left is repairing the failed server.
The VRRP protocol: Virtual Router Redundancy Protocol
Related terms:
    Virtual Router
    VRID: virtual router identifier (0-255)
    Physical routers:
        master: the active device
        backup: the standby device
        priority: election priority
    VIP: Virtual IP
    VMAC: Virtual MAC (00-00-5e-00-01-VRID)
    Gratuitous ARP
Advertisements: carry heartbeat, priority, etc.; sent periodically
Preemptive vs. non-preemptive mode
Working modes:
    active/standby: a single virtual router
    active/active: active/standby (virtual router 1) plus standby/active (virtual router 2)
Common high-availability cluster software:
    keepalived
    corosync
failover: when a node in the cluster stops sending heartbeats, the leader decides to move its services to another node
failback: services are switched back once the failed node comes online again
keepalived
A software implementation of the VRRP protocol, originally designed to make the ipvs service highly available:
    moves addresses between nodes based on VRRP;
    generates ipvs rules on the node holding the VIP (predefined in the configuration file);
    performs health checks on each RS of the ipvs cluster;
    exposes a script-hook interface: executed scripts can carry out the tasks they define and thereby influence cluster behavior.
Components:
    Core components:
        vrrp stack
        ipvs wrapper
        checkers
    Control component: the configuration file parser
    I/O multiplexer
    Memory management component
keepalived architecture diagram: (figure not reproduced)
Prerequisites for an HA cluster:
(1) Time must be synchronized across all nodes (ntp, chrony);
(2) Make sure iptables and SELinux do not get in the way;
(3) Nodes can reach each other by hostname (not strictly required by keepalived); using /etc/hosts is recommended;
(4) The interfaces used for cluster traffic must support MULTICAST (class D addresses: 224-239).
2. keepalived service configuration
keepalived: shipped in the base repository since CentOS 6.4;
Program environment:
    Main configuration file: /etc/keepalived/keepalived.conf
    Main program file: /usr/sbin/keepalived
    Unit file: keepalived.service
    Environment file for the unit file: /etc/sysconfig/keepalived
Configuration file layout:
TOP HIERARCHY
    GLOBAL CONFIGURATION
        Global definitions
        Static routes/addresses
    VRRPD CONFIGURATION
        VRRP synchronization group(s): VRRP sync groups
        VRRP instance(s): each vrrp instance is one VRRP router
    LVS CONFIGURATION
        Virtual server group(s)
        Virtual server(s): the VS and RSs of the ipvs cluster
A minimal skeleton of this layout is sketched below.
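The following sketch shows how these blocks nest in keepalived.conf, assuming placeholder names and addresses (VI_1, the VRID, the multicast group, and the VIP are illustrative, not from the original):

    global_defs {
        router_id node1                   # arbitrary identifier for this node
        vrrp_mcast_group4 224.0.100.19    # multicast group for VRRP advertisements
    }

    vrrp_instance VI_1 {                  # one vrrp instance = one VRRP router
        state MASTER                      # MASTER or BACKUP
        interface ens33
        virtual_router_id 51              # VRID (0-255), identical on all members
        priority 100                      # higher priority wins the election
        advert_int 1                      # advertisement interval, seconds
        virtual_ipaddress {
            192.168.30.111/24 dev ens33
        }
    }

    # virtual_server blocks (the LVS CONFIGURATION part) follow here; see lab 5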
3. Lab: a keepalived master/backup setup
Preparation:
5 virtual machines
    keepalived1: 192.168.30.10 (CentOS 7.4)
    keepalived2: 192.168.30.18 (CentOS 7.4)
    RS1: 192.168.30.27 (CentOS 7.4)
    RS2: 192.168.30.17 (CentOS 7.4)
    Client: 192.168.30.16 (CentOS 7.4)
Steps:
On keepalived1 and keepalived2:
Install the keepalived service:
yum install keepalived
On keepalived1:
Edit the main configuration file:
vim /etc/keepalived/keepalived.conf
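The post shows the file only as a screenshot; the following is a minimal MASTER-side sketch consistent with the VIP used later in this article (the VRID, priority, and password are assumed values):

    # keepalived1: /etc/keepalived/keepalived.conf
    global_defs {
        router_id keepalived1
    }

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51              # assumed VRID; must match keepalived2
        priority 100                      # higher than the BACKUP node
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kapass1             # assumed; must match keepalived2
        }
        virtual_ipaddress {
            192.168.30.111/24 dev ens33
        }
    }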
Restart the keepalived service:
systemctl restart keepalived
On keepalived2:
Edit the main configuration file:
vim /etc/keepalived/keepalived.conf
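On keepalived2 the sketch differs only in state and priority (everything else must match keepalived1):

    # keepalived2: /etc/keepalived/keepalived.conf
    global_defs {
        router_id keepalived2
    }

    vrrp_instance VI_1 {
        state BACKUP                      # this node starts as the standby
        interface ens33
        virtual_router_id 51
        priority 90                       # lower than the MASTER's 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kapass1
        }
        virtual_ipaddress {
            192.168.30.111/24 dev ens33
        }
    }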
Restart the keepalived service:
systemctl restart keepalived
Check the IP addresses on keepalived1: the MASTER now holds the virtual router address.
Simulate the MASTER's NIC going down:
ifconfig ens33 down
Check the IP addresses on keepalived2: the virtual IP has migrated to the BACKUP.
When the MASTER's NIC is re-enabled, the virtual address migrates back to the MASTER.
4. Lab: a keepalived dual-master setup
Lab environment: carries over from the master/backup lab.
Steps:
On keepalived1:
Add a new virtual router instance on top of the single-master configuration:
vim /etc/keepalived/keepalived.conf
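The added instance appears only as a screenshot; a sketch of what it plausibly looks like on keepalived1, which acts as BACKUP for the second virtual router (the second VIP is not named in this lab, so 192.168.30.222 from lab 6 stands in; VRID, priority, and password are assumed):

    vrrp_instance VI_2 {
        state BACKUP                      # keepalived1 is the standby for VI_2
        interface ens33
        virtual_router_id 61              # must differ from VI_1's VRID
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kapass2
        }
        virtual_ipaddress {
            192.168.30.222/24 dev ens33
        }
    }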
Restart the keepalived service:
systemctl restart keepalived
Start the httpd service:
echo keepalived1 > /var/www/html/index.html
systemctl restart httpd
On keepalived2:
Add the corresponding new virtual router instance on top of the single-master configuration:
vim /etc/keepalived/keepalived.conf
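The mirror image on keepalived2, which is MASTER for VI_2 (same assumptions as above):

    vrrp_instance VI_2 {
        state MASTER                      # keepalived2 owns VI_2's VIP by default
        interface ens33
        virtual_router_id 61
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kapass2
        }
        virtual_ipaddress {
            192.168.30.222/24 dev ens33
        }
    }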
Restart the keepalived service:
systemctl restart keepalived
Start the httpd service:
echo keepalived2 > /var/www/html/index.html
systemctl restart httpd
keepalived1 and keepalived2 now each hold one of the two virtual IP addresses.
Client test:
Simulate keepalived2 going offline:
systemctl stop keepalived
Check keepalived1's IP addresses: both virtual IPs have moved to keepalived1.
The client tests again:
5. Lab: a highly available LVS-DR cluster with keepalived
Preparation:
5 virtual machines
    keepalived1: 192.168.30.10 (CentOS 7.4)
    keepalived2: 192.168.30.18 (CentOS 7.4)
    RS1: 192.168.30.27 (CentOS 7.4)
    RS2: 192.168.30.17 (CentOS 7.4)
    Client: 192.168.30.16 (CentOS 7.4)
The keepalived configuration carries over from the dual-master lab.
Steps:
Add the following configuration on both keepalived1 and keepalived2:
vim /etc/keepalived/keepalived.conf
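The added LVS section is only shown as a screenshot; a plausible reconstruction for a DR-mode virtual server with a local sorry server (the scheduler, weights, and health-check timings are assumptions):

    virtual_server 192.168.30.111 80 {
        delay_loop 2                      # seconds between health-check runs
        lb_algo rr                        # assumed scheduler (round robin)
        lb_kind DR                        # direct-routing forwarding mode
        protocol TCP
        sorry_server 127.0.0.1 80         # used when every RS is down

        real_server 192.168.30.27 80 {
            weight 1
            HTTP_GET {                    # mark the RS down if / stops returning 200
                url {
                    path /
                    status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 2
            }
        }
        real_server 192.168.30.17 80 {
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 2
            }
        }
    }

With this block in place, keepalived programs the ipvs rules itself, which is why no ipvsadm commands are needed on the directors.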
Restart the keepalived service:
systemctl restart keepalived
On RS1:
Bind the VIP and adjust the ARP kernel parameters:
[root@RS1 ~]#echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@RS1 ~]#echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@RS1 ~]#echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@RS1 ~]#echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@RS1 ~]# ip addr a 192.168.30.111/32 dev lo
Start the web service:
echo R1 > /var/www/html/index.html
systemctl restart httpd
On RS2:
[root@RS2 ~]#echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@RS2 ~]#echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@RS2 ~]#echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@RS2 ~]#echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@RS2 ~]#ip addr a 192.168.30.111/32 dev lo
Start the web service:
echo R2 > /var/www/html/index.html
systemctl restart httpd
The LVS rules are now active on keepalived1 and keepalived2:
Client access test:
Simulate keepalived2 stopping its service:
systemctl stop keepalived
A packet capture on keepalived1 shows that keepalived keeps the VIP serviced.
Simulate RS1 stopping its web service:
systemctl stop httpd
The LVS rule set shows that RS1 has been removed automatically.
When both real servers are offline, requests are redirected to the local SORRY SERVER.
Simulate RS2 stopping its web service:
systemctl stop httpd
Client requests now land on the SORRY SERVER.
6. Lab: a dual-master LVS-DR high-availability cluster
The environment and preparation carry over from the highly available LVS-DR lab.
Add a new virtual_server block to the main configuration file on both keepalived1 and keepalived2:
vim /etc/keepalived/keepalived.conf
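A sketch of the added block for the second VIP, mirroring the first virtual_server (same assumed checker settings):

    virtual_server 192.168.30.222 80 {
        delay_loop 2
        lb_algo rr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.30.27 80 {
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 2
            }
        }
        # a second real_server block for 192.168.30.17 follows,
        # identical apart from the address
    }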
Restart the keepalived service:
systemctl restart keepalived.service
Check the LVS rules:
Bind the 192.168.30.222 address to the lo interface on both RS1 and RS2:
ip addr a 192.168.30.222/32 dev lo
Client access test:
7. Lab: Nginx high availability with keepalived
Preparation:
5 virtual machines
    keepalived1: 192.168.30.10 (CentOS 7.4)
    keepalived2: 192.168.30.18 (CentOS 7.4)
    RS1: 192.168.30.27 (CentOS 7.4)
    RS2: 192.168.30.17 (CentOS 7.4)
    Client: 192.168.30.16 (CentOS 7.4)
Steps:
Set up the reverse proxy on both keepalived1 and keepalived2:
vim /etc/nginx/conf.d/proxy.conf
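The proxy configuration is shown as a screenshot in the original; a minimal sketch that balances across the two RSs (the upstream name websrvs is an assumption):

    # /etc/nginx/conf.d/proxy.conf
    upstream websrvs {
        server 192.168.30.27:80;
        server 192.168.30.17:80;
    }

    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://websrvs;    # forward all requests to the upstream group
        }
    }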
Start the nginx service:
systemctl start nginx.service
Client test:
Configure keepalived:
Edit the main configuration file on both keepalived1 and keepalived2 as follows:
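The screenshots are not reproduced here; a sketch of the instance with an nginx health check attached (the script name chk_nginx and the weight are assumptions; killall -0 only tests that an nginx process exists):

    vrrp_script chk_nginx {
        script "killall -0 nginx"         # exits 0 while nginx is running
        interval 1                        # run the check every second
        weight -20                        # drop the priority by 20 on failure
        fall 2                            # consecutive failures to mark down
        rise 1                            # successes needed to recover
    }

    vrrp_instance VI_1 {
        state MASTER                      # BACKUP with priority 90 on keepalived2
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kapass1
        }
        virtual_ipaddress {
            192.168.30.111/24 dev ens33
        }
        track_script {
            chk_nginx                     # a failing check lowers priority, so the VIP moves
        }
    }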
Restart the keepalived service:
systemctl restart keepalived.service
keepalived1's NIC now holds the virtual IP:
keepalived2, being the BACKUP, has no virtual IP on its NIC:
Simulate the MASTER, keepalived1, stopping its service:
systemctl stop keepalived.service
The virtual IP 192.168.30.111 now migrates to keepalived2's NIC,
and keepalived2 continues to reverse-proxy requests to the RSs.
8. Lab: HAProxy high availability with keepalived
Likewise, first set up the haproxy reverse proxy on both keepalived servers.
Steps:
yum install haproxy
vim /etc/haproxy/haproxy.cfg
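Only the screenshots show the proxy setup; a minimal haproxy.cfg sketch for the same two RSs (the listen-section name and balance algorithm are assumptions, appended after the stock global/defaults sections):

    # /etc/haproxy/haproxy.cfg (proxy section only)
    listen websrvs
        bind *:80
        balance roundrobin
        server rs1 192.168.30.27:80 check  # 'check' enables health checking
        server rs2 192.168.30.17:80 check

Then start it with systemctl start haproxy.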
Verify that the haproxy reverse proxy is up on both keepalived machines:
On the keepalived side, simply replace nginx with haproxy in the tracking script from the Nginx lab (e.g. killall -0 haproxy).