1 Introduction
In this chapter we deploy ELK (Elasticsearch 1.4.4 + Logstash 1.4.2 + Kibana 3) on CentOS 6.5 with SSL certificate authentication, and show how to combine these components to collect logs. This chapter focuses on collecting system (syslog) logs.
The main use case for centralized log collection is inspecting and analyzing system and application logs, temporarily or permanently, from a single window. This is a great convenience for users, and it also gives them some freedom in how the data is presented.
2 Goals
This article shows how to use Logstash to collect syslog logs from multiple target hosts, and how to use Kibana to analyze and visualize what is collected.
2.1 The four components
Logstash: the Logstash server side, used to ingest and process logs
Elasticsearch: stores all kinds of logs
Kibana: a web interface for searching and visualizing logs
Logstash Forwarder: the Logstash client side, which ships logs to the Logstash server over the lumberjack network protocol
We will install the first three components on a single machine, which will act as our Logstash server. Logstash Forwarder will be installed on every server whose logs we want to collect, and all logs will be shipped to the Logstash server.
2.2 Basic concepts
NRT: Near Real Time (NRT); search and analysis results become available with a latency of under one second.
Cluster: a cluster is uniquely identified by its name, which defaults to elasticsearch.
Node: a node is part of a cluster and stores data; a single cluster can have as many nodes as you want. If no other Elasticsearch nodes are running on your network, starting a single node will by default form a new single-node cluster named elasticsearch.
Index: index names must be lowercase; within a single cluster you can define as many indexes as you want.
Type: within one index you can define one or more types.
Document: the smallest unit that can be indexed, expressed as JSON. For example, one document might represent a single user, another a single product, and another a single order. An index/type can store many documents.
Shards & replicas: index can store a billion documents taking up 1TB of disk space, single node may be not fit, and may bo too slow to serve search requests from a single node alone. To solve this problem, Elasticsearch provides subdivide the indes into multiple pieces called shards. When create an index, we can simple define the num of shards that we want.
Sharding two primary reasons:
l Horizontally split/scale content volume(方便縱向切割或橫向擴展)
l Allow distribute distribute and parallelize operation shards (允許并行或分布式操作碎片)each index in Elasticsearch is allocated 5 primary shards and 1 replica which means that if u’ve at least two nodes in cluster, u index will have 5 primary shards and another 5 replica shards(1 complete replica)for total of 10 shards per index.
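These defaults can be overridden per index at creation time. A minimal sketch, once Elasticsearch is running later in this chapter (the index name myindex is a placeholder):

# curl -XPUT 'http://localhost:9200/myindex' -d '{ "settings" : { "number_of_shards" : 5, "number_of_replicas" : 1 } }'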
3 Deployment environment
3.1 Environment preparation
ELK test environment:

| HostName | InnerIp      | OuterIp | HardWare | System                     | Version                  | Role               |
|----------|--------------|---------|----------|----------------------------|--------------------------|--------------------|
| AppS2    | 192.168.1.38 | \       | RAM:1GB  | CentOS release 6.5 (Final) | ElasticSearch:1.4.2      | ELK Server         |
| AppS3    | 192.168.1.39 | \       |          |                            | Logstash Forwarder:0.3.1 | Logstash Forwarder |
| Manager  | 192.168.1.40 | \       |          |                            | ansible 1.8.2            | AnsibleManager     |
3.2 Server configuration
3.2.1 Install Java 7
The ELK stack runs on Java 7; the install command is as follows:
# yum install java-1.7.0-openjdk -y
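To confirm the JDK is in place, check the version (the exact OpenJDK build shown in the output will vary):

# java -version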
3.2.2 Install ElasticSearch
//import the ElasticSearch GPG key into rpm
# rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch
//create new yum repository file for ElasticSearch
# vi /etc/yum.repos.d/elasticsearch.repo
//add the following to elasticsearch.repo
[elasticsearch-1.4]
name=Elasticsearch repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
//install elasticsearch
# yum install elasticsearch-1.4.1 -y
//edit /etc/elasticsearch/elasticsearch.yml
script.disable_dynamic: true                 //add this line
network.host: localhost                      //uncomment; prevents outsiders from reading data or even shutting down the Elasticsearch cluster through the HTTP API
discovery.zen.ping.multicast.enabled: false  //uncomment; disables multicast discovery
3.2.3 Start Elasticsearch
# service elasticsearch restart
//add it to the startup items (start on boot)
# /sbin/chkconfig --add elasticsearch
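Since network.host is set to localhost above, a quick local sanity check is to query the HTTP API; it should return a JSON banner with the cluster name and version:

# curl http://localhost:9200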
3.2.4 Install Kibana
# cd /data/software; curl -O https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz
# tar -xvf kibana-3.0.1.tar.gz
# vim kibana-3.0.1/config.js
//change the port number from 9200 to 80
elasticsearch: "http://"+window.location.hostname+":80",
//create the kibana directory under nginx
# mkdir -p /usr/share/nginx/kibana3
# cp -R kibana-3.0.1/* /usr/share/nginx/kibana3/
3.2.5 Install Logstash
Logstash can be installed via yum:
# vim /etc/yum.repos.d/logstash.repo
//add the following configuration
[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
Install it:
# yum -y install logstash-1.4.2
3.2.6 Install Nginx
# yum install nginx
//Kibana uses Elasticsearch's port 9200 by default, but that would let users access Elasticsearch directly, so we proxy access through the web server's port 80 instead. Kibana provides an nginx configuration file for this that can be downloaded and used directly.
# curl -OL https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf
//the nginx.conf configuration
# cat nginx.conf
#
# Nginx proxy for Elasticsearch + Kibana
#
# In this setup, we are password protecting the saving of dashboards. You may
# wish to extend the password protection to all paths.
#
# Even though these paths are being called as the result of an ajax request, the
# browser will prompt for a username/password on the first request
#
# If you use this, you'll want to point config.js at http://FQDN:80/ instead of
# http://FQDN:9200
#
server {
  listen *:80;

  server_name kibana2.ihuilian.com;

  access_log /var/log/nginx/kibana2.access.log;

  location / {
    root /usr/share/nginx/kibana3;
    index index.html index.htm;
  }

  location ~ ^/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/_nodes$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_search$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_mapping {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }

  # Password protected end points
  location ~ ^/kibana-int/dashboard/.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/conf.d/kibana2.htpasswd;
    }
  }
  location ~ ^/kibana-int/temp.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/conf.d/kibana2.htpasswd;
    }
  }
}
//after saving and exiting
# cp nginx.conf /etc/nginx/conf.d/default.conf
//install httpd-tools so we can use htpasswd to generate a username/password pair:
# yum install httpd-tools-2.2.15 -y
//generate the username and password
# htpasswd -c /etc/nginx/conf.d/kibana2.htpasswd user
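With the configuration and password file in place, it is worth validating the nginx configuration syntax before starting the service:

# nginx -t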
//start Nginx
# service nginx restart
//start on boot
# chkconfig nginx on
3.2.7 SSL certificates
As described above, for security we access Elasticsearch through the web proxy. On top of that, we generate an SSL certificate and key pair so that the log transport between Logstash Forwarder and the Logstash server is authenticated and encrypted.
# vim /etc/pki/tls/openssl.cnf
//add the following under the [v3_ca] section
subjectAltName = IP:192.168.1.38
Generate the SSL certificate and key:
# cd /etc/pki/tls
# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Generating a 2048 bit RSA private key
.........................................+++
.....................+++
writing new private key to 'private/logstash-forwarder.key'
-----
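To confirm the subjectAltName made it into the certificate, you can inspect it from the same directory:

# openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'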
3.2.8 Configure Logstash
Logstash configuration files use a JSON-like format and live under /etc/logstash/conf.d. A configuration consists of three main parts: inputs, filters, and outputs.
First, create the input file 01-lumberjack-input.conf, which uses the lumberjack input protocol that Logstash Forwarder speaks.

The input configuration:
# vim /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {    # collect logs over the lumberjack protocol
    port => 5000  # listen on port 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
The filter configuration:
# vim /etc/logstash/conf.d/10-syslog.conf
# This filter matches logs tagged with the syslog type and uses grok
# to parse them into structured, queryable fields
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
The output configuration:
# vim /etc/logstash/conf.d/30-lumberjack-output.conf
# This output stores the logs in Elasticsearch. With this setup Logstash
# also accepts logs that do not match the filter above; they are stored,
# just without being structured.
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Start Logstash:
# service logstash restart
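If Logstash fails to start, a syntax check of the configuration files usually finds the problem quickly. A sketch, assuming the RPM installed Logstash under /opt/logstash:

# /opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/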
3.3 Client configuration
3.3.1 Install Logstash Forwarder
//send the server's SSL certificate file to the shipping server
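One way to ship the certificate, assuming root SSH access from the ELK server to the client (run on the ELK server; 192.168.1.39 is AppS3 from the table above):

# scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.1.39:/tmp/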
Download it from the official site: https://www.elastic.co/downloads/logstash
logstash-forwarder-0.4.0-1.x86_64.rpm
//install it with the following command
# rpm -ihv logstash-forwarder-0.4.0-1.x86_64.rpm
//add the Logstash Forwarder init script
# cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
# chmod +x logstash-forwarder
//the init script depends on the config file /etc/sysconfig/logstash-forwarder
# curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig
//edit and save it
# vim /etc/sysconfig/logstash-forwarder
//copy the SSL certificate file into the corresponding directory
# cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
3.3.2 Configure Logstash Forwarder
//edit and save
//the shipper will connect to port 5000 on the logstash server
# vim /etc/logstash-forwarder
{
  "network": {
    "servers": [ "192.168.1.38:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    }
  ]
}
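Logstash Forwarder is strict about its JSON; a quick way to catch syntax errors, assuming the stock Python on CentOS 6:

# python -m json.tool < /etc/logstash-forwarder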
//start logstash-forwarder
# service logstash-forwarder start
//start on boot
# chkconfig --add logstash-forwarder
//repeat the same configuration on every other server whose logs you want to collect
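To confirm that events are actually arriving, list the Logstash indices on the ELK server; Logstash creates one date-stamped index per day:

# curl 'http://localhost:9200/_cat/indices?v'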
3.4 Connecting to Kibana
//once everything above is configured, we are collecting all the logs we want, and Kibana gives us a friendly web interface to work with them
//in a browser, go to kibana2.ihuilian.com (use whatever name you configured) or the server IP to reach the logstash server; the first page you see is the Kibana welcome page
//click "Logstash dashboard" to open the preconfigured dashboard; you should see a histogram of log events plus the log messages themselves (if you do not, one of the four components is misconfigured; please check)
//try the following exercises:
- Search for "root" to see if anyone is trying to log into your servers as root
- Search for a particular hostname
  (this appears to support whole-term matches only)
- Change the time frame by selecting an area on the histogram or from the menu above
- Click on messages below the histogram to see how the data is being filtered
4 Using Kibana
4.1 Dashboard settings
4.2 Auto-refresh
In fact, you can add any exported dashboard to that directory and access it as http://YOUR-HOST-HERE/index.html#dashboard/file/YOUR-DASHBOARD.json. Neat trick eh?
http://kibana.ihuilian.com/#/dashboard/file/default.json
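In Kibana 3, "that directory" is the app/dashboards folder inside the Kibana install, which in this setup is /usr/share/nginx/kibana3/app/dashboards. A sketch, where YOUR-DASHBOARD.json stands for your exported dashboard file:

# cp YOUR-DASHBOARD.json /usr/share/nginx/kibana3/app/dashboards/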
5 Q&A
Log collection is slow.
No matching rule is found for a file.
5.1 Adding a new shipper fails; its logs never show up
a) Checked the logs: nothing abnormal
b) Verified the SSL certificate file: OK
c) # service logstash-forwarder restart returned OK (the restart had actually failed but still reported success; I did not spot the problem at first. Only trust the system's most basic commands; third-party scripts often have problems of one degree or another)
d) Restarted logstash, elasticsearch, kibana and nginx on the server; the new host still could not be found
e) Redeployed the shipper environment from scratch, carefully verifying every step
f) Found the bug in the logstash-forwarder init script; after fixing it, new hosts were added normally
Large logs are ingested gradually, accumulating in batches of 100 entries.
6 Monitoring nginx logs
//define the Nginx log format
log_format logstash '$http_host $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time $upstream_response_time';

access_log /var/log/nginx/AppM.access.log logstash;
//update logstash-forwarder
# vim /etc/logstash-forwarder
{
  "network": {
    "servers": [ "192.168.1.38:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/messages*", "/var/log/secure*" ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [ "/var/log/nginx/AppM.access.log*" ],
      "fields": { "type": "nginx-access" }
    }
  ]
}
Restart logstash-forwarder for the changes to take effect.
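The shipper now tags nginx entries with type nginx-access, but the Logstash server so far only has a syslog filter, so these events will arrive unparsed. Below is a minimal server-side filter sketch for the log_format defined above; the file name 11-nginx.conf and the grok pattern are assumptions, written by eye against the log_format fields rather than taken from the original setup:

# vim /etc/logstash/conf.d/11-nginx.conf
filter {
  if [type] == "nginx-access" {
    grok {
      # one capture per element of the logstash log_format above
      match => { "message" => "%{NOTSPACE:http_host} %{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{NOTSPACE:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} %{NUMBER:bytes} \"%{NOTSPACE:referrer}\" \"%{DATA:agent}\" %{NUMBER:request_time} %{NOTSPACE:upstream_time}" }
    }
    date {
      # use the request's own timestamp as the event timestamp
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}

After adding the filter, restart logstash on the server so it takes effect.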
7 References:
https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-6
http://www.wklken.me/posts/2015/04/26/elk-for-nginx-log.html
http://www.cnblogs.com/yjf512/p/4199105.html
http://www.tuicool.com/articles/UnUzimJ
http://www.learnes.net/getting_started/README.html
http://bigbo.github.io/pages/2015/02/28/elasticsearch_hadoop/
https://github.com/lmenezes/elasticsearch-kopf
https://github.com/chenryn/kibana-guide-cn/blob/master/v4/dashboard.md
http://kibana.logstash.es/content/