Implementing ELK on CentOS 6

1 Introduction

This article covers deploying an ELK stack (Elasticsearch 1.4.4 + Logstash 1.4.2 + Kibana 3) with SSL certificate authentication on CentOS 6.5, and shows how to combine these components to collect logs. This chapter focuses on collecting system (syslog) logs.

The main use case for centralized log collection is inspecting and analyzing system and application logs, either ad hoc or on an ongoing basis, from a single window. This is a great convenience, and it also gives users some flexibility in how the data is presented.

2 Goals

We will use Logstash to collect syslogs from multiple target hosts, and use Kibana to analyze and visualize the collected logs.

2.1 The four components

Logstash: the Logstash server side, which ingests logs

Elasticsearch: stores all the logs

Kibana: a web interface for searching and visualizing logs

Logstash Forwarder: the Logstash client side, which sends logs to the Logstash server over the lumberjack network protocol

We will install the first three components on one server, which will act as our Logstash server. Logstash Forwarder will be installed on every server whose logs we want to collect; all logs will be sent to the Logstash server.

2.2 Basic concepts

NRT: Near Real-Time (NRT) analysis; latency is typically within one second.

Cluster: a cluster is uniquely identified by its name, which defaults to elasticsearch.

Node: part of a cluster; stores data. A single cluster can have as many nodes as you want. If no other Elasticsearch nodes are running on your network, starting a single node will by default form a new single-node cluster named elasticsearch.

Index: index names must be lowercase. In a single cluster, you can define as many indexes as you want.

Type: within one index, you can define one or more types.

Document: the smallest unit that can be indexed, expressed as JSON. For example, one document might represent a single user, another a single product, and another a single order. An index/type can store many documents.

Shards & replicas: an index can store a billion documents taking up 1 TB of disk space; a single node may not be able to hold it, and may be too slow to serve search requests on its own. To solve this problem, Elasticsearch can subdivide an index into multiple pieces called shards. When creating an index, you simply define the number of shards you want.

Sharding serves two primary purposes:

- Horizontally splitting/scaling the content volume

- Allowing operations to be distributed and parallelized across shards. By default, each index in Elasticsearch is allocated 5 primary shards and 1 replica, which means that if you have at least two nodes in your cluster, your index will have 5 primary shards and another 5 replica shards (1 complete replica), for a total of 10 shards per index.
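For illustration, shard and replica counts can be set when an index is created through the HTTP API. This is a sketch against a hypothetical local node; the index name is made up:

```shell
# create an index named "myindex" with 5 primary shards and 1 replica
# (10 shards total once a second node joins the cluster)
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "settings": { "number_of_shards": 5, "number_of_replicas": 1 }
}'
```

Note that the number of primary shards cannot be changed after the index is created, while the replica count can be adjusted later.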

3 Deployment environment

3.1 Environment preparation

ELK test environment:

| HostName | InnerIp | OuterIp | HardWare | System | Version | Role |
|----------|---------|---------|----------|--------|---------|------|
| AppS2 | 192.168.1.38 | \ | RAM: 1GB, CPU: 1 | CentOS release 6.5 (Final) | ElasticSearch 1.4.2, LogStash 1.4.2, Kibana 3.0.1 | ELK Server |
| AppS3 | 192.168.1.39 | \ | | | Logstash Forwarder 0.3.1 | Logstash Forwarder |
| Manager | 192.168.1.40 | \ | | | ansible 1.8.2 | AnsibleManager |

3.2 Server configuration

3.2.1 Install Java 7

The ELK stack runs on Java 7. Install it as follows:

# yum install java-1.7.0-openjdk -y

3.2.2 Install ElasticSearch

//import the ElasticSearch GPG key into rpm

# rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

//create new yum repository file for ElasticSearch

# vi /etc/yum.repos.d/elasticsearch.repo

//add the following content to elasticsearch.repo

[elasticsearch-1.4]
name=Elasticsearch repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

//install elasticsearch

# yum install elasticsearch-1.4.1 -y
//edit /etc/elasticsearch/elasticsearch.yml
script.disable_dynamic: true  //add this line
network.host: localhost  //uncomment; prevents outside access through the HTTP API, where anyone could read data or even shut down the Elasticsearch cluster
discovery.zen.ping.multicast.enabled: false //uncomment; disables multicast discovery

3.2.3 Start Elasticsearch

# service elasticsearch restart

//add it to the boot startup list

# /sbin/chkconfig --add elasticsearch
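Once the service is up, a quick way to confirm the node is responding is to query the HTTP API locally (port 9200 is the default; since network.host is set to localhost above, this only works from the server itself):

```shell
# returns the node name, cluster name, and version as JSON when healthy
curl -X GET 'http://localhost:9200/'
```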

3.2.4 Install Kibana

# cd /data/software; curl -O https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz
# tar -xvf kibana-3.0.1.tar.gz
# vim kibana-3.0.1/config.js  //change the port number from 9200 to 80

elasticsearch: "http://"+window.location.hostname+":80",
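The same edit can be scripted instead of done in vim. The snippet below is a minimal, self-contained sketch: it rewrites a stand-in copy of the relevant line rather than the real config.js, so the temp file is illustrative only.

```shell
tmp=$(mktemp)
# a stand-in for the elasticsearch line in kibana-3.0.1/config.js
echo 'elasticsearch: "http://"+window.location.hostname+":9200",' > "$tmp"
# point Kibana at port 80 instead of 9200
sed -i 's/:9200/:80/' "$tmp"
cat "$tmp"
```

On a real host you would run the sed command against kibana-3.0.1/config.js directly.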

//create the kibana directory under nginx

# mkdir -p /usr/share/nginx/kibana3
# cp -R kibana-3.0.1/* /usr/share/nginx/kibana3/

3.2.5 Install Logstash

Logstash can also be installed via yum:

# vim /etc/yum.repos.d/logstash.repo

//add the following configuration

[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Install it:

# yum -y install logstash-1.4.2

3.2.6 Install Nginx

# yum install nginx

//Kibana uses Elasticsearch's port 9200 by default, but that would let users access Elasticsearch directly. Instead we proxy requests through the web server's port 80. Kibana provides an nginx configuration file that you can download and use directly.

# curl -OL https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf

//the nginx.conf configuration

# cat nginx.conf
#
# Nginx proxy for Elasticsearch + Kibana
#
# In this setup, we are password protecting the saving of dashboards. You may
# wish to extend the password protection to all paths.
#
# Even though these paths are being called as the result of an ajax request, the
# browser will prompt for a username/password on the first request
#
# If you use this, you'll want to point config.js at http://FQDN:80/ instead of
# http://FQDN:9200
#
server {
  listen                *:80 ;
 
  server_name           kibana2.ihuilian.com.;
  access_log              /var/log/nginx/kibana2.access.log;
 
  location / {
    root    /usr/share/nginx/kibana3;
    index    index.html  index.htm;
  }
 
  location ~ ^/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/_nodes$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_search$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_mapping {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
 
  # Password protected end points
  location ~ ^/kibana-int/dashboard/.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file   /etc/nginx/conf.d/kibana2.htpasswd;
    }
  }
  location ~ ^/kibana-int/temp.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file   /etc/nginx/conf.d/kibana2.htpasswd;
    }
  }
}

//after saving and exiting, copy it into place

# cp nginx.conf /etc/nginx/conf.d/default.conf

//install httpd-tools, which provides htpasswd, to generate a username/password pair:

# yum install httpd-tools-2.2.15 -y

//generate the username and password

# htpasswd -c /etc/nginx/conf.d/kibana2.htpasswd user

//start Nginx

# service nginx restart

//enable it at boot

# chkconfig nginx on

3.2.7 SSL certificates

As mentioned above, for security we access Elasticsearch through the web server, and we use SSL certificates to further secure access.

# vim /etc/pki/tls/openssl.cnf

//add the following under the [v3_ca] section

subjectAltName=IP: 192.168.1.38

Generate the SSL certificate files:

# cd /etc/pki/tls
# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Generating a 2048 bit RSA private key
..........+++
.....+++
writing new private key to 'private/logstash-forwarder.key'
-----
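To check that the subjectAltName actually made it into the certificate, inspect it with openssl x509. The sketch below reproduces the whole flow in a throwaway directory; the file names and CN are illustrative, not the ones used on the real server.

```shell
tmpdir=$(mktemp -d)
# a throwaway openssl config with the same [v3_ca] SAN as in the main text
cat > "$tmpdir/san.cnf" <<'EOF'
[req]
prompt = no
distinguished_name = dn
x509_extensions = v3_ca
[dn]
CN = logstash-server
[v3_ca]
subjectAltName = IP:192.168.1.38
EOF
# generate a self-signed cert/key pair, as in the main text
openssl req -config "$tmpdir/san.cnf" -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout "$tmpdir/logstash-forwarder.key" \
  -out "$tmpdir/logstash-forwarder.crt" 2>/dev/null
# confirm the IP SAN was embedded in the certificate
openssl x509 -in "$tmpdir/logstash-forwarder.crt" -noout -text | grep 'IP Address'
```

If the grep prints nothing, the [v3_ca] edit did not take effect and Logstash Forwarder will refuse the connection later.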

3.2.8 Configure Logstash

Logstash configuration files use a JSON-like syntax and live in /etc/logstash/conf.d. A configuration has three main sections: inputs, filters, and outputs.

First create the input file 01-lumberjack-input.conf, which uses the lumberjack input protocol that Logstash Forwarder speaks.

The input configuration:

# vim /etc/logstash/conf.d/01-lumberjack-input.conf

input {
  lumberjack {      # receive logs via the lumberjack protocol
    port => 5000    # listen on port 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

The filter configuration:

# vim /etc/logstash/conf.d/10-syslog.conf

# This filter matches logs tagged as syslog and uses grok to parse them into a more structured, queryable form
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

The output configuration:

# vim /etc/logstash/conf.d/30-lumberjack-output.conf

# Store events in Elasticsearch. With this output, combined with the rules above, Logstash will also ingest logs that match no filter rule; those logs simply will not be structured.
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
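Before restarting, the combined configuration can be sanity-checked with the agent's --configtest flag (the path below is the default install location for the Logstash 1.4 RPM):

```shell
# parse-check all config files without starting the pipeline
/opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/
```

This catches syntax errors, such as an unbalanced brace in a grok pattern, before they take down the running service.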

Start Logstash:

# service logstash restart

3.3 Client configuration

3.3.1 Install Logstash Forwarder

//send the server's SSL certificate file to each shipper server

Download logstash-forwarder from the official site https://www.elastic.co/downloads/logstash

logstash-forwarder-0.4.0-1.x86_64.rpm

//install it with the following command

# rpm -ihv logstash-forwarder-0.4.0-1.x86_64.rpm

//add the Logstash Forwarder init script

# cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
# chmod +x logstash-forwarder

//the init script depends on the configuration file /etc/sysconfig/logstash-forwarder

# curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

//edit and save it

# vim /etc/sysconfig/logstash-forwarder

//copy the SSL certificate into the corresponding directory

# cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

3.3.2 Configure Logstash Forwarder

//edit and save

//the shipper will connect to port 5000 on the Logstash server

# vim /etc/logstash-forwarder
{
  "network": {
    "servers": [ "192.168.1.38:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}
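Logstash Forwarder is unforgiving about malformed JSON, so it is worth validating the file before starting the service. A minimal sketch (it validates a stand-in copy of the config; on a real host, point the last command at /etc/logstash-forwarder instead, and note that CentOS 6 ships python rather than python3):

```shell
tmpcfg=$(mktemp)
# a stand-in copy of the /etc/logstash-forwarder config from this section
cat > "$tmpcfg" <<'EOF'
{
  "network": {
    "servers": [ "192.168.1.38:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    { "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" } }
  ]
}
EOF
# exits non-zero and prints an error if the JSON does not parse
python3 -m json.tool "$tmpcfg" > /dev/null && echo "valid JSON"
```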

//start logstash-forwarder

# service logstash-forwarder start

//enable it at boot

# chkconfig --add logstash-forwarder

//any other server whose logs you want to collect can be configured the same way

3.4 Connecting to Kibana

//Once everything above is configured, we can collect all the logs we want, and Kibana provides a friendly web interface for working with them.

//In a browser, enter kibana2.ihuilian.com (or whatever hostname/IP you configured) to reach the Logstash server. The first page you see is the Kibana welcome page.

//Click "Logstash dashboard" to open the preconfigured dashboard. You should see a histogram of log events along with the log messages themselves (if you do not, one of the four components is misconfigured; please check each one).

//Next, try the following exercises:

- Search for "root" to see if anyone is trying to log into your servers as root

- Search for a particular hostname

(It appears that only exact matches are supported.)

- Change the time frame by selecting an area on the histogram or from the menu above

- Click on messages below the histogram to see how the data is being filtered

4 Using Kibana

4.1 Dashboard settings


4.2 Auto-refresh

In fact, you can add any exported dashboard to that directory and access it as http://YOUR-HOST-HERE/index.html#dashboard/file/YOUR-DASHBOARD.json. Neat trick, eh?

http://kibana.ihuilian.com/#/dashboard/file/default.json

5 Q&A

- Log collection is slow

- No matching rule was found for the file

5.1 Adding a new shipper fails; it never shows up

a) Checked the logs: nothing abnormal

b) Verified the SSL certificate files: normal

c) # service logstash-forwarder restart reported success (the restart actually failed even though it returned success; there was a problem I did not notice. Trust only the system's most basic commands; third-party scripts often have problems of varying severity)

d) Restarted logstash, elasticsearch, kibana, and nginx on the server; the new host still could not be found

e) Redeployed the shipper environment from scratch, carefully verifying every step

f) Found a problem in the logstash-forwarder init script; after fixing it, new hosts were added normally

Large logs are ingested incrementally, about 100 entries at a time.

6 Monitoring nginx logs

//define the nginx log format

log_format logstash '$http_host $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" $request_time $upstream_response_time';

access_log /var/log/nginx/AppM.access.log logstash;

//modify the logstash-forwarder configuration

# vim /etc/logstash-forwarder

{
  "network": {
    "servers": [ "192.168.1.38:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages*",
        "/var/log/secure*"
      ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [
        "/var/log/nginx/AppM.access.log*"
      ],
      "fields": { "type": "nginx-access" }
    }
  ]
}

Restart logstash-forwarder for the change to take effect.
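On the server side, events tagged nginx-access arrive unparsed unless a matching filter is added. The fragment below is a sketch only: the filename is made up, and the grok pattern is a hand-written guess at the log_format defined above, so verify it against real log lines (for example in Kibana, or with stdout output) before relying on it.

```
# /etc/logstash/conf.d/11-nginx.conf (hypothetical filename)
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?\" %{NUMBER:response} %{NUMBER:bytes} \"%{NOTSPACE:referrer}\" \"%{DATA:agent}\" %{NUMBER:request_time} %{NOTSPACE:upstream_time}" }
    }
  }
}
```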

7 References:

https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-6

http://www.wklken.me/posts/2015/04/26/elk-for-nginx-log.html

http://www.cnblogs.com/yjf512/p/4199105.html

http://www.tuicool.com/articles/UnUzimJ

http://www.learnes.net/getting_started/README.html

http://bigbo.github.io/pages/2015/02/28/elasticsearch_hadoop/

https://github.com/lmenezes/elasticsearch-kopf

http://logstash.es/

https://github.com/chenryn/kibana-guide-cn/blob/master/v4/dashboard.md

http://kibana.logstash.es/content/


Original article by kang. If reposted, please credit the source: http://www.www58058.com/79151
