Keepalived high availability for haproxy with dynamic/static separation of URL resources
Key implementation points:
(1) Separate Discuz X dynamic and static content; both must be served through load balancing;
(2) As a further test, add a varnish cache between haproxy and the backend hosts;
(3) Provide the topology design;
(4) haproxy configuration requirements:
(a) enable stats;
(b) customize the 403, 502 and 503 error pages;
(c) choose a suitable scheduling method for each group of backend hosts;
(d) keep proper logs;
(e) use keepalived to make haproxy highly available.
Topology
- Two keepalived nodes in a dual-master model provide high availability for the two haproxy hosts; the two VIPs are 10.1.253.11 and 10.1.253.12.
- The haproxy hosts receive requests, split off the static image requests from the dynamic ones, and schedule one varnish cache host and two httpd hosts.
- The varnish cache host caches the user-uploaded static image resources served by the backend nginx servers and schedules the two nginx hosts.
- The nginx hosts serve the image resources and provide an NFS service to the websrv hosts, mapped as the Discuz X attachment directory.
- The websrv hosts run httpd, mysql and php, handling the Discuz X dynamic resources and the static resources that are not split off, such as css.
Configure the NFS service on the nginx server
Install the NFS packages
yum install nfs-utils
Configure the NFS share
/etc/exports
/data/discuz 10.1.253.66(rw,no_root_squash) 10.1.253.67(rw,no_root_squash)
Create an apache user and grant it access
groupadd -g 48 apache        # create the group first if gid 48 does not already exist
useradd -u 48 -g 48 -s /sbin/nologin apache
setfacl -m u:apache:rwx /data/discuz
Start the NFS service
systemctl start nfs.service
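The export can then be checked quickly; both commands below are standard nfs-utils tools, and 10.1.253.29 is the nginx/NFS host from this setup:
exportfs -v                  # on the nginx host: list the active exports
showmount -e 10.1.253.29     # on a websrv host: confirm /data/discuz is visible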
Configure the websrvs hosts
Install the AMP stack and Discuz X
Key steps:
yum install httpd mariadb-server mysql php php-mysql php-xcache
mysql -uroot -p -e "CREATE DATABASE ultrax; GRANT ALL ON ultrax.* TO 'ultraxuser'@'10.1.%.%' IDENTIFIED BY 'ultraxpass'; FLUSH PRIVILEGES;"
Mount the NFS share at the user attachment upload path
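A minimal sketch of this step; the Discuz X document root /var/www/html/discuz is an assumption, while the nginx host 10.1.253.29 and the /data/discuz export come from the configuration above:
mount -t nfs 10.1.253.29:/data/discuz /var/www/html/discuz/data/attachment
# to persist across reboots, an /etc/fstab entry such as:
# 10.1.253.29:/data/discuz  /var/www/html/discuz/data/attachment  nfs  defaults,_netdev  0 0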
Start mariadb and httpd, then access the site to test
Configure the nginx hosts
nginx serves the static image resources uploaded by users; its virtual host's root simply points to the NFS-shared directory.
To map the resource path in the URL to the corresponding resource under the virtual host's root, the requested URL must be rewritten or redirected. The rewrite can clearly be done on the front-most haproxy host, in the varnish service, or in the nginx service, as long as the new URL maps to the resource's path on the nginx host. There is no need to rewrite the same URL in haproxy, varnish and nginx at the same time; given the number of backend hosts, I would do the rewrite in haproxy or in varnish.
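For illustration only, an haproxy-side rewrite could look like the sketch below, using the reqrep directive available in haproxy 1.5 (this setup actually performs the rewrite in varnish, as shown later):
# rewrites "GET /data/attachment/forum/... HTTP/1.1" into "GET /forum/... HTTP/1.1"
reqrep ^([^\ :]*)\ /data/attachment/(.*)     \1\ /\2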
Install nginx
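nginx is available from the EPEL repository (the same repository needed for varnish below), e.g.:
yum install nginx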
Configure the virtual host (the second virtual host on port 81, referenced by the varnish and haproxy backends below, is presumably configured the same way)
server {
    listen 82;
    server_name localhost;
    location / {
        root /data/discuz;
        index index.html index.htm;
    }
    location ~* \.(jpg|jpeg|gif|png)$ {
        root /data/discuz;
        rewrite ^/.*forum/(.*)$ /$1 break;
    }
}
Start the nginx service and test access
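For a quick syntax check and startup, for example:
nginx -t && systemctl start nginx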
- The original URL of a resource
- Replace the host in that URL with the nginx host and request the URL directly
- nginx access log output:
10.1.250.19 - - [13/Nov/2016:9:01:53 +0800] "GET /data/attachment/forum/201611/12/174905kkys2e2wgmv25ywe.jpg HTTP/1.1" 200 126931 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36" "-"
varnish cache server
The varnish server caches the responses, schedules the nginx servers, and checks the health of the nginx service.
Install varnish
The EPEL yum repository must be configured before installing.
yum install varnish
Configure the cache service
Configure the varnish runtime parameters
/etc/varnish/varnish.params
VARNISH_LISTEN_PORT=80
……
VARNISH_STORAGE="malloc,128M"
Configure the varnish caching logic
As noted earlier, the URL rewrite can be done on the varnish server; with many backend nginx hosts it is more convenient to rewrite the URL in varnish.
In varnish the rewrite is implemented with the regsub() function.
To avoid confusion with the rewrite on the nginx side, the URL rewrite in the nginx virtual host configuration should be commented out.
/etc/varnish/default.vcl
vcl 4.0;
import directors;
……
backend nginx1 {
    .host = "10.1.253.29";
    .port = "81";
    .probe = ok;
}
backend nginx2 {
    .host = "10.1.253.29";
    .port = "82";
}
sub vcl_init {
    new RR = directors.round_robin();
    RR.add_backend(nginx1);
    RR.add_backend(nginx2);
}

sub vcl_recv {
    set req.backend_hint = RR.backend();
    if (req.url ~ "(?i)\.(jpg|jpeg|gif|png)$") {
        set req.url = regsub(req.url, "/.*attachment/(.*)", "/\1");
    }
    ……
}

sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    #
    # Here you clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend does.
}

sub vcl_deliver {
    ……
}
Start varnish and test access
systemctl start varnish
- Request the resource's URL on the varnish server
- nginx access log on the server side:
10.1.253.29 - - [13/Nov/2016:22:21:43 +0800] "GET /forum/201611/12/174905kkys2e2wgmv25ywe.jpg HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36" "10.1.250.19"
The effect: as long as the resource exists under the rewritten path, whatever path precedes the resource in the requested URL, the request is rewritten to the same resource under the custom path.
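For example, with the varnish host reachable at 10.1.253.30 (this address is an assumption for illustration), both requests below are rewritten by the regsub() above to the same object, GET /forum/201611/12/174905kkys2e2wgmv25ywe.jpg, on the nginx backends:
curl -I http://10.1.253.30/data/attachment/forum/201611/12/174905kkys2e2wgmv25ywe.jpg
curl -I http://10.1.253.30/foo/bar/attachment/forum/201611/12/174905kkys2e2wgmv25ywe.jpg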
Configure the haproxy hosts
Install
yum install haproxy
Configuration file
Configuration file path: /etc/haproxy/haproxy.cfg
The configuration mainly defines the frontend and the backends; the frontend matches the URI with ACLs:
the url_static_beg ACL matches the leading path of the URI, and url_static_end matches the URI's suffix.
Only when both conditions are met is the static backend group used; all other URLs go to the default dynamic backend group.
In addition, error response codes are redirected to error pages hosted on another machine,
and the haproxy stats page is enabled.
Configure the frontend
frontend main *:80
    acl url_static_beg path_beg -i /data/attachment
    acl url_static_end path_end -i .jpg .gif .png .css .js

    use_backend static if url_static_beg url_static_end

    default_backend dynamic

    errorloc 503 http://10.1.253.29:82/errorpage/503sorry.html
    errorloc 403 http://10.1.253.29:82/errorpage/403sorry.html
    errorloc 502 http://10.1.253.29:82/errorpage/502sorry.html
Configure the backends
backend dynamic
    balance roundrobin
    ……
    server web1 10.1.253.66:81 check cookie amp1
    server web2 10.1.253.66:82 check cookie amp2

backend static
    balance roundrobin
    server ngx1 10.1.253.29:81 check
    server ngx2 10.1.253.29:82 check
Configure the stats page
listen stats
    bind :
    stats enable
    stats uri /admin?stats
    ……
    stats refresh 10s
    stats admin if TRUE
    stats hide-version
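The logging requirement (point (d) above) is usually met by sending haproxy logs to the local syslog daemon. A sketch, assuming rsyslog runs on the haproxy host; the facility and log file name are arbitrary choices (the stock CentOS haproxy.cfg already logs to 127.0.0.1 facility local2):
# /etc/haproxy/haproxy.cfg, global section
    log 127.0.0.1 local2

# /etc/rsyslog.d/haproxy.conf -- enable the UDP syslog input and route local2
$ModLoad imudp
$UDPServerRun 514
local2.*    /var/log/haproxy.log

systemctl restart rsyslog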
Test results
Keepalived high availability for the haproxy service
Install
yum install keepalived
Configure the keepalived hosts in a dual-master model
/etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@jasonmc.com
    smtp_server localhost
    smtp_connect_timeout 30
    router_id node1
    vrrp_mcast_group4 224.22.29.1
}
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -5
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy && exit 0 || exit 1"
    interval 1
    weight -5
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 10
    priority 96
    advert_int 10
    authentication {
        auth_type PASS
        auth_pass 1a7b2ce6
    }
    virtual_ipaddress {
        10.1.253.11 dev eno16777736
    }
    track_script {
        chk_down
        chk_haproxy
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eno16777736
    virtual_router_id 11
    priority 100
    advert_int 11
    authentication {
        ……
    }
    ……
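The second node (haproxy2) mirrors this file with the instance roles swapped. A sketch; router_id and the priorities below are assumptions for illustration, while VIP2 10.1.253.12 comes from the topology:
global_defs {
    ……
    router_id node2                 # assumed
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 10
    priority 93                     # assumed: below node1's 96, above 96-5=91 after a failed track_script
    ……
}
vrrp_instance VI_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 11
    priority 105                    # assumed: above node1's 100
    advert_int 11
    ……
    virtual_ipaddress {
        10.1.253.12 dev eno16777736
    }
    ……
}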
Start the keepalived service and test
systemctl start keepalived
When haproxy1 and haproxy2 are both online:
- haproxy1 holds VIP1 (10.1.253.11)
- haproxy2 holds VIP2 (10.1.253.12)
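Which VIP each node currently holds can be verified on either host with, for example:
ip addr show eno16777736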
Triggering haproxy1 to go offline
Create a down file under /etc/keepalived/ on the VI_1 node (haproxy1); keepalived's track_script detects the file and takes the node out of service.
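That is, on haproxy1:
touch /etc/keepalived/down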
- keepalived log output on haproxy1:
Nov 14 13:18:55 h1 Keepalived_vrrp[54901]: VRRP_Script(chk_down) failed
Nov 14 13:19:01 h1 Keepalived_vrrp[54901]: VRRP_Instance(VI_1) Received higher prio advert
Nov 14 13:19:01 h1 Keepalived_vrrp[54901]: VRRP_Instance(VI_1) Entering BACKUP STATE
Nov 14 13:19:01 h1 Keepalived_vrrp[54901]: VRRP_Instance(VI_1) removing protocol VIPs.
Nov 14 13:19:01 h1 Keepalived_healthcheckers[54900]: Netlink reflector reports IP 10.1.253.11 removed
Because chk_down now fails, haproxy1's priority for VI_1 drops by the script weight and falls below haproxy2's, so haproxy2 takes over as the MASTER of VI_1.
- keepalived log output on haproxy2:
Nov 14 13:19:01 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) forcing a new MASTER election
Nov 14 13:19:01 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) forcing a new MASTER election
Nov 14 13:19:11 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 14 13:19:21 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 14 13:19:21 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 14 13:19:21 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eno16777736 for 10.1.253.11
Nov 14 13:19:21 h1 Keepalived_healthcheckers[58091]: Netlink reflector reports IP 10.1.253.11 added
Nov 14 13:19:26 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eno16777736 for 10.1.253.11
- haproxy2 now holds both VIP1 and VIP2.
Bringing haproxy1 back online
Remove the down file from /etc/keepalived/ on the VI_1 node (haproxy1); once the track_script no longer detects it, the node comes back into service.
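That is, on haproxy1:
rm -f /etc/keepalived/down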
- keepalived log output on haproxy1:
Nov 14 13:58:02 h1 Keepalived_vrrp[67748]: VRRP_Script(chk_down) succeeded
Nov 14 13:58:12 h1 Keepalived_vrrp[67748]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 14 13:58:22 h1 Keepalived_vrrp[67748]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 14 13:58:22 h1 Keepalived_vrrp[67748]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 14 13:58:22 h1 Keepalived_vrrp[67748]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eno16777736 for 10.1.253.11
Nov 14 13:58:22 h1 Keepalived_healthcheckers[67747]: Netlink reflector reports IP 10.1.253.11 added
Nov 14 13:58:27 h1 Keepalived_vrrp[67748]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eno16777736 for 10.1.253.11
- keepalived log output on haproxy2:
Nov 14 13:58:12 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) Received higher prio advert
Nov 14 13:58:12 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) Entering BACKUP STATE
Nov 14 13:58:12 h1 Keepalived_vrrp[58092]: VRRP_Instance(VI_1) removing protocol VIPs.
Nov 14 13:58:12 h1 Keepalived_healthcheckers[58091]: Netlink reflector reports IP 10.1.253.11 removed
- haproxy1 once again holds VIP1 and haproxy2 holds VIP2.
Test results
- Accessing either VIP1 or VIP2 serves the Discuz X site proxied by haproxy normally.
- User-uploaded attachment resources are served by the varnish server or the nginx servers.
Summary
HAProxy is a pure, high-performance reverse proxy. It proxies application-layer protocols, and with mode tcp it can also proxy transport-layer traffic, so it can sit in front of web servers, dynamic application engines and databases. It checks the health of the backend hosts and thereby provides HA for the backends. Its built-in stats page makes it easy to inspect the state of frontends and backends and to take backend hosts in and out of service with a few simple operations.
As described above, URL rewriting can be implemented on the HAProxy proxy, on the Varnish cache server, or on the nginx hosts; to keep a larger number of backend hosts manageable, it is usually done on the HAProxy or Varnish layer.
HAProxy's single-process, event-driven model lets it handle a large number of concurrent requests; connection sessions stored in elastic binary trees (ebtrees) can be managed very flexibly, and scheduling of the backend hosts can be tuned very precisely.
Original article by helloc. If you repost it, please credit the source: http://www.www58058.com/58618