Preface
In a Tomcat cluster, when one node fails, how do the other nodes take over the failed node's session information? This article presents a solution: session sharing based on MSM + Memcached.
Background
MSM
MSM, the Memcached Session Manager, is a highly available Tomcat session-sharing solution. Besides reading session information quickly from local memory (sticky sessions only), it can also store and retrieve sessions in Memcached to achieve high availability.
How It Works
How it works in Sticky Session mode
# The Tomcat-local session is the primary session; the session in Memcached is the backup
The MSM installed in Tomcat keeps sessions in local memory. When a request finishes, if the corresponding session did not previously exist locally (i.e. this was the user's first request), a copy of the session is replicated to Memcached. When the next request for that session arrives, Tomcat's local session is used, and after the request is processed any session changes are synced to Memcached to keep the data consistent.
When one Tomcat in the cluster goes down, the next request is routed to another Tomcat. The Tomcat handling this request knows nothing about the session, so it looks the session up in Memcached, updates it, and saves it locally. When the request finishes, the modified session is sent back to Memcached as the backup.
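The sticky-mode flow above can be sketched as a small simulation (a minimal sketch in Python; the dict-based stores and the `handle_request` helper are illustrative inventions, not MSM's actual API):

```python
# Minimal simulation of MSM sticky mode: the Tomcat-local session is the
# primary copy; memcached only holds a backup used after a node failure.

local_a = {}       # Tomcat A's in-memory sessions (primary)
local_b = {}       # Tomcat B's in-memory sessions (primary)
memcached = {}     # shared backup store

def handle_request(local, session_id, data):
    """Serve a request on one Tomcat node and back the session up."""
    if session_id not in local:
        # Node has no local copy (first request here, or failover):
        # recover the session from the memcached backup if present.
        local[session_id] = dict(memcached.get(session_id, {}))
    local[session_id].update(data)
    # After the request, sync the changed session back to memcached.
    memcached[session_id] = dict(local[session_id])
    return local[session_id]

# Normal traffic lands on Tomcat A.
handle_request(local_a, "s1", {"user": "scholar"})
# Tomcat A dies; the load balancer routes the next request to Tomcat B,
# which rebuilds the session from the memcached backup.
handle_request(local_b, "s1", {"page": "2"})
print(local_b["s1"])   # {'user': 'scholar', 'page': '2'}
```

The key point the sketch shows: Tomcat B never saw "s1" before, yet it serves the request with the full session state because the backup in memcached fills the gap.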
How it works in Non-sticky Session mode
# The Tomcat-local session is only a transit copy; Memcached holds the primary and backup sessions
When a request arrives, the backup session is loaded into the local container; if loading the backup fails, the session is loaded from the primary instead.
After the request is processed, session changes are synced back to Memcached and the Tomcat-local session is cleared.
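The non-sticky flow described above can likewise be sketched (a minimal Python simulation following the backup-then-primary loading order described here; the dict stores and `handle_request` helper are illustrative, not MSM's real implementation):

```python
# Minimal simulation of MSM non-sticky mode: memcached holds the primary
# and backup copies; the Tomcat-local session is only a transit copy.

primary = {}    # primary session copies (one memcached node)
backup = {}     # backup session copies (another memcached node)

def handle_request(session_id, data):
    """Serve one request: load, update, write back, drop the local copy."""
    # Load the backup copy into the local container; if that fails,
    # fall back to the primary copy (per the flow described above).
    local = dict(backup.get(session_id) or primary.get(session_id) or {})
    local.update(data)
    # After the request, sync the change to memcached...
    primary[session_id] = dict(local)
    backup[session_id] = dict(local)
    # ...and clear the Tomcat-local session.
    del local
    return primary[session_id]

handle_request("s1", {"user": "scholar"})
backup.clear()                      # simulate losing the backup node
state = handle_request("s1", {"page": "2"})
print(state)   # {'user': 'scholar', 'page': '2'}
```

Because no Tomcat keeps the session between requests, any node can serve any request, which is why non-sticky mode needs no load-balancer affinity at all.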
Implementation
Lab topology
# System environment: CentOS 6.6
Installing and configuring nginx
# Resolve dependencies
[root@scholar ~]# yum groupinstall "Development Tools" "Server Platform Development" -y
[root@scholar ~]# yum install openssl-devel pcre-devel -y
[root@scholar ~]# groupadd -r nginx
[root@scholar ~]# useradd -r -g nginx nginx
[root@scholar ~]# tar xf nginx-1.6.3.tar.gz
[root@scholar ~]# cd nginx-1.6.3
[root@scholar nginx-1.6.3]# ./configure \
> --prefix=/usr/local/nginx \
> --sbin-path=/usr/sbin/nginx \
> --conf-path=/etc/nginx/nginx.conf \
> --error-log-path=/var/log/nginx/error.log \
> --http-log-path=/var/log/nginx/access.log \
> --pid-path=/var/run/nginx/nginx.pid \
> --lock-path=/var/lock/nginx.lock \
> --user=nginx \
> --group=nginx \
> --with-http_ssl_module \
> --with-http_flv_module \
> --with-http_stub_status_module \
> --with-http_gzip_static_module \
> --http-client-body-temp-path=/usr/local/nginx/client/ \
> --http-proxy-temp-path=/usr/local/nginx/proxy/ \
> --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ \
> --http-uwsgi-temp-path=/usr/local/nginx/uwsgi \
> --http-scgi-temp-path=/usr/local/nginx/scgi \
> --with-pcre
[root@scholar nginx-1.6.3]# make && make install
Provide a SysV init script for nginx
[root@scholar ~]# vim /etc/rc.d/init.d/nginx
# Create the file /etc/rc.d/init.d/nginx with the following content:
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
   # make required directories
   user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
   options=`$nginx -V 2>&1 | grep 'configure arguments:'`
   for opt in $options; do
       if [ `echo $opt | grep '.*-temp-path'` ]; then
           value=`echo $opt | cut -d "=" -f 2`
           if [ ! -d "$value" ]; then
               # echo "creating" $value
               mkdir -p $value && chown -R $user $value
           fi
       fi
   done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
Make the script executable
[root@scholar ~]# chmod +x /etc/rc.d/init.d/nginx
Add it to the service management list and enable it at boot
[root@scholar ~]# chkconfig --add nginx
[root@scholar ~]# chkconfig nginx on
Configure nginx
[root@scholar ~]# vim /etc/nginx/nginx.conf
upstream www.scholar.com {
    server 172.16.10.123:8080;
    server 172.16.10.124:8080;
}
server {
    listen       80;
    server_name  www.scholar.com;
    location / {
        proxy_pass http://www.scholar.com;
        index  index.jsp index.html index.htm;
    }
}
[root@scholar ~]# service nginx start
Starting nginx:                                            [  OK  ]
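Note that the upstream block above balances round-robin by default, so successive requests alternate between the two Tomcat backends; this is exactly why shared sessions are needed here rather than load-balancer stickiness. A minimal sketch of round-robin selection (illustrative Python, not nginx's actual scheduler):

```python
# Sketch of default round-robin upstream selection: each new request
# goes to the next backend in the list, cycling forever.

backends = ["172.16.10.123:8080", "172.16.10.124:8080"]

def make_picker(servers):
    """Return a function that cycles through servers round-robin."""
    state = {"i": 0}
    def pick():
        server = servers[state["i"] % len(servers)]
        state["i"] += 1
        return server
    return pick

pick = make_picker(backends)
print([pick() for _ in range(4)])
# ['172.16.10.123:8080', '172.16.10.124:8080', '172.16.10.123:8080', '172.16.10.124:8080']
```

With this scheduling, two consecutive refreshes from the same browser land on different Tomcats, so without MSM each refresh would create a fresh session.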
Installing and configuring Tomcat
Install the JDK
[root@node1 ~]# rpm -ivh jdk-7u79-linux-x64.rpm
[root@node1 ~]# vim /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export PATH=$JAVA_HOME/bin:$PATH
[root@node1 ~]# . /etc/profile.d/java.sh
Install Tomcat
[root@node1 ~]# tar xf apache-tomcat-7.0.62.tar.gz -C /usr/local/
[root@node1 ~]# cd /usr/local/
[root@node1 local]# ln -sv apache-tomcat-7.0.62/ tomcat
[root@node1 local]# vim /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export PATH=$CATALINA_HOME/bin:$PATH
[root@node1 local]# . /etc/profile.d/tomcat.sh
Provide an init script
[root@node1 local]# vim /etc/rc.d/init.d/tomcat
#!/bin/sh
# Tomcat init script for Linux.
#
# chkconfig: 2345 96 14
# description: The Apache Tomcat servlet/JSP container.
#
JAVA_OPTS='-Xms64m -Xmx128m'
JAVA_HOME=/usr/java/latest
CATALINA_HOME=/usr/local/tomcat
export JAVA_HOME CATALINA_HOME
case $1 in
start)
    exec $CATALINA_HOME/bin/catalina.sh start ;;
stop)
    exec $CATALINA_HOME/bin/catalina.sh stop ;;
restart)
    $CATALINA_HOME/bin/catalina.sh stop
    sleep 2
    exec $CATALINA_HOME/bin/catalina.sh start ;;
*)
    echo "Usage: `basename $0` {start|stop|restart}"
    exit 1 ;;
esac

[root@node1 local]# chmod +x /etc/rc.d/init.d/tomcat
[root@node1 local]# chkconfig --add tomcat
[root@node1 local]# chkconfig tomcat on
# Run all of the above on both tomcat nodes
Access test
Prepare the test pages
[root@node1 local]# cd tomcat/webapps/
[root@node1 webapps]# mkdir -pv test/WEB-INF/{classes,lib}
[root@node1 webapps]# cd test/
[root@node1 test]# vim index.jsp
<%@ page language="java" %>
<html>
  <head><title>TomcatA</title></head>
  <body>
    <h1><font color="red">TomcatA.scholar.com</font></h1>
    <table align="centre" border="1">
      <tr>
        <td>Session ID</td>
        <% session.setAttribute("scholar.com","scholar.com"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>
# On the other node, replace TomcatA with TomcatB and set the color to blue
[root@node1 test]# service tomcat start
At this point the session information is not consistent between the nodes. Next we configure MSM to achieve session sharing.
Installing memcached
# Resolve dependencies
[root@scholar ~]# yum groupinstall "Development Tools" "Server Platform Development" -y
# Install libevent
# memcached depends on the libevent API, so install it first
[root@scholar ~]# tar xf libevent-2.0.22-stable.tar.gz
[root@scholar ~]# cd libevent-2.0.22-stable
[root@scholar libevent-2.0.22-stable]# ./configure --prefix=/usr/local/libevent
[root@scholar libevent-2.0.22-stable]# make && make install
[root@scholar ~]# echo "/usr/local/libevent/lib" > /etc/ld.so.conf.d/libevent.conf
[root@scholar ~]# ldconfig
# Build and install memcached
[root@scholar ~]# tar xf memcached-1.4.24.tar.tar
[root@scholar ~]# cd memcached-1.4.24
[root@scholar memcached-1.4.24]# ./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent
[root@scholar memcached-1.4.24]# make && make install
Provide an init script
[root@scholar ~]# vim /etc/init.d/memcached
#!/bin/bash
#
# Init file for memcached
#
# chkconfig: - 86 14
# description: Distributed memory caching daemon
#
# processname: memcached
# config: /etc/sysconfig/memcached

. /etc/rc.d/init.d/functions

## Default variables
PORT="11211"
USER="nobody"
MAXCONN="1024"
CACHESIZE="64"
RETVAL=0

prog="/usr/local/memcached/bin/memcached"
desc="Distributed memory caching"
lockfile="/var/lock/subsys/memcached"

start() {
    echo -n $"Starting $desc (memcached): "
    daemon $prog -d -p $PORT -u $USER -c $MAXCONN -m $CACHESIZE
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success && touch $lockfile || failure
    echo
    return $RETVAL
}

stop() {
    echo -n $"Shutting down $desc (memcached): "
    killproc $prog
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success && rm -f $lockfile || failure
    echo
    return $RETVAL
}

restart() {
    stop
    start
}

reload() {
    echo -n $"Reloading $desc ($prog): "
    killproc $prog -HUP
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success || failure
    echo
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    condrestart)
        [ -e $lockfile ] && restart
        RETVAL=$?
        ;;
    reload)
        reload
        ;;
    status)
        status $prog
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|condrestart|status}"
        RETVAL=1
esac
exit $RETVAL
Grant execute permission and start the service
[root@scholar ~]# chmod +x /etc/init.d/memcached
[root@scholar ~]# chkconfig --add memcached
[root@scholar ~]# chkconfig memcached on
[root@scholar ~]# service memcached start
# Run all of the above on both memcached nodes
Tomcat configuration
Place the required jar files in the lib directory under the Tomcat installation directory on each tomcat node
[root@node1 ~]# cd msm/
[root@node1 msm]# ls
javolution-5.4.3.1.jar               msm-javolution-serializer-1.8.1.jar
memcached-session-manager-1.8.1.jar  spymemcached-2.10.2.jar
memcached-session-manager-tc7-1.8.1.jar
[root@node1 msm]# cp * /usr/local/tomcat/lib/
# Run the above on every tomcat node
[root@node1 msm]# vim /usr/local/tomcat/conf/server.xml
<?xml version='1.0' encoding='utf-8'?>
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost">
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>
      <Host name="www.scholar.com" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Context path="/test" docBase="/usr/local/tomcat/webapps/test/" reloadable="true">
          <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
                   memcachedNodes="n1:172.16.10.126:11211,n2:172.16.10.212:11211"
                   failoverNodes="n1"
                   requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
                   transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory" />
        </Context>
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="scholar_access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
      <Host name="localhost" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
    </Engine>
  </Service>
</Server>
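The failoverNodes="n1" setting in the Manager element deserves a note: MSM uses nodes listed there only when the regular memcached nodes are unavailable, so n1 is held back as a spare while n2 takes the normal backup traffic. How that selection behaves can be sketched like this (illustrative Python, not MSM's actual selection code):

```python
# Sketch of how MSM's failoverNodes setting picks a memcached node:
# nodes listed in failoverNodes are skipped while any regular node is
# alive, and are used only as a last resort.

nodes = ["n1", "n2"]            # from memcachedNodes
failover_nodes = ["n1"]         # from failoverNodes on this Tomcat

def pick_node(alive):
    """Prefer regular nodes; use failover nodes only as a last resort."""
    regular = [n for n in nodes if n not in failover_nodes and n in alive]
    if regular:
        return regular[0]
    spare = [n for n in nodes if n in failover_nodes and n in alive]
    return spare[0] if spare else None

print(pick_node({"n1", "n2"}))  # n2  (n1 is held back as failover)
print(pick_node({"n1"}))        # n1  (n2 died, so fall back to n1)
```

This matches the failover test later in the article: sessions normally live on n2, and only move to n1 once n2 goes down.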
Sync the configuration file to the other node
[root@node1 msm]# scp /usr/local/tomcat/conf/server.xml node2:/usr/local/tomcat/conf/
[root@node1 msm]# service tomcat restart
[root@node1 msm]# ssh node2 'service tomcat restart'
Access test
As you can see, session sharing is now working. Next we simulate a failure of the TomcatB node to see whether the session changes.
[root@node2 msm]# service tomcat stop
Although the TomcatB failure caused the user's request to be scheduled to the TomcatA node, the Session ID did not change. In other words, every node in the session cluster holds the global session information, so user access continues uninterrupted.
If the n2 (memcached) node fails, will the session information move to the other memcached node? Let's try it.
[root@scholar ~]# service memcached stop
The session has moved to n1, and the Session ID has not changed. With that, Tomcat session sharing based on MSM + Memcached is achieved.
The end
That wraps up the experiment on Tomcat session sharing with MSM + Memcached. If you run into problems while following along, feel free to leave a comment. The above is just my personal study notes; if there are errors or omissions, please go easy on me~~~
Original article by 書生. If you repost it, please credit the source: http://www.www58058.com/5984
Why do the 2 tomcats show different session IDs after I set this up?
Session ID A14E6BC4D742B22CEBDFB1D46A85A5A1-n1
Session ID 578EFEAAEEEE8B85812498547E0FE283-n1
javolution-5.4.3.1.jar msm-javolution-serializer-1.8.1.jar
memcached-session-manager-1.8.1.jar spymemcached-2.10.2.jar
memcached-session-manager-tc7-1.8.1.jar
Are there version requirements for these jars? I used the latest version of each.
@bun: The MSM-related versions must match each other and must also match your Tomcat version.
@書生:memcached-session-manager-1.8.3.jar
memcached-session-manager-tc7-1.8.3.jar
spymemcached-2.11.1.jar
javolution-5.4.3.1.jar
msm-javolution-serializer-1.8.3.jar
Is there any problem with this version combination?
I found that when I access the tomcat instances directly, the sessions are consistent.
But once nginx load-balances them, every refresh yields a different session value from each instance.
Where might the configuration be wrong?
@bun: non-sticky mode
@bun: Go through it again from the start, or watch the video to see whether anything was missed. With the configuration above it should generally work; perhaps your refresh interval is too long or the local cache was cleared.
@書生: The tomcat instances are accessed at http://1.2.3.4:8080/test/session.jsp and http://1.2.3.4:18080/test/session.jsp
The nginx configuration is
upstream tomcat {
server 12.3.4:8080;
server 1.2.3.4:18080;
}
http://www.xxx.com
proxy_pass http://tomcat/test/;
Accessing http://www.xxx.com/session.jsp works,
but configured this way the session changes on every refresh.
If I change it to
http://www.xxx.com
proxy_pass http://tomcat/;
then http://www.xxx.com/test/session.jsp works and the session no longer changes.
But the path gains an extra /test/.
How can I make the former configuration work?
@bun: Why can't I make sense of your nginx configuration?
@書生: What is the difference between proxy_pass http://tomcat/ and proxy_pass http://tomcat/test ?
@書生: On tomcat the page address includes the path:
http://1.2.3.4:8080/test/session.jsp
but nginx proxies the bare domain: http://www.xxx.com/session.jsp
so I used proxy_pass http://tomcat/test
Configured that way, the session keeps changing when accessed through nginx.
With proxy_pass http://tomcat/ it behaves normally, but the access URL gains the extra test path:
http://www.xxx.com/test/session.jsp
@bun: Why worry about that? If you really must get to the bottom of it, review your nginx knowledge and it will become clear. Try to solve things yourself whenever you can.
書生's work is always top quality; it's my fault this is only being pinned now~