elasticsearch for docker (base from alpine linux)

About this image

The official ELK images use Debian as the base image, which makes them fairly bloated. That prompted this rebuild: by switching to lightweight Alpine Linux as the base, the image comes out at roughly half the size of the official one.

How to use?

  1. Usage is the same as the official image, with one difference: the official default configuration listens only on localhost, while this image is changed to listen on all interfaces. The official image claims the restriction is for security; weigh the trade-off yourself, or mount a custom configuration file to replace the default.
    Note in particular that since 5.0, settings are no longer passed with -Des; use -E followed by a key/value pair instead, e.g. -Enode.name=test_node

For example:
Pull the image:

docker pull qq58945591/elasticsearch

Start Elasticsearch:

docker run -d --name es -p 9200:9200 -p 9300:9300 qq58945591/elasticsearch -Enode.name=SERVER1 -Ecluster.name=MY_CLUSTER

The ELK stack

An elk_example.yml is included so you can quickly bring up an ELK log-collection stack. By default it only processes nginx and postfix logs; to handle more log types, you will likely need to customize the logstash Dockerfile and its filter configuration.

elk_example.yml

version:  "2"
services:
  elasticsearch:
    image:  qq58945591/elasticsearch
    ports:
      - 127.0.0.1:9200:9200
      - 127.0.0.1:9300:9300
    volumes:
      - /opt/elasticsearch/data:/usr/share/elasticsearch/data
    user: elasticsearch
    restart:  always
    command: -Enode.name=elk_node1 -Ecluster.name=mycluster
    container_name: "elasticsearch"
    networks:
      - elk_network

  logstash:
    image:  qq58945591/logstash
    user: root
    restart:  always
    links:
      - redis:redis-server
      - elasticsearch:elasticsearch
    command: "-f /etc/logstash/conf.d/logstash.conf"
    container_name: "logstash"
    depends_on:
      - elasticsearch
      - redis
    mem_limit: 1024m
    networks:
      - elk_network

  kibana:
    image:  qq58945591/kibana
    ports:
      - 127.0.0.1:5601:5601
    user: root
    restart:  always
    links:
      - elasticsearch:elasticsearch
#    environment:
#      - ELASTICSEARCH_URL=http://elasticsearch:9200
    container_name: "kibana"
    depends_on:
      - elasticsearch
    mem_limit: 512m
    networks:
      - elk_network

  redis:
    image:  redis
    ports:
      - 127.0.0.1:6379:6379
    volumes:
      - /opt/redis:/data
    user: root
    restart:  always
    container_name: "redis-server"
    mem_limit: 1024m
    networks:
      - elk_network

networks:
  elk_network:
     driver: bridge
     driver_opts:
       com.docker.network.enable_ipv6: "false"
     ipam:
       driver: default
       config:
         - subnet: 172.18.1.0/24
           gateway: 172.18.1.1

Note:

Because redis is used as the broker, the log collection flow is rsyslog (client nodes) --> rsyslog (central log server) --> redis (broker buffer queue) --> logstash (structuring) --> elasticsearch (storage). By default rsyslog cannot push data to redis; you need to recompile rsyslog with the --enable-omhiredis option. For details, see the Redis Output Module documentation.
There is a worked example to follow in Recipe: rsyslog + Redis + Logstash
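The broker stage of the pipeline above can be sketched with an in-memory stand-in for the redis list (no redis required), showing the FIFO hand-off between rsyslog and logstash. The push/pop direction (RPUSH-like on the rsyslog side, BLPOP-like on the logstash side) is an assumption here; check the omhiredis and logstash redis-input docs for your versions.

```python
from collections import deque

# In-memory stand-in for the redis list used as the broker (the "mail"
# and "nginx" keys configured below). No real redis commands are issued.
broker = deque()

def rsyslog_push(line):
    # rsyslog side: omhiredis appends each rendered log line
    # (RPUSH-like: add to the tail of the list).
    broker.append(line)

def logstash_pop():
    # logstash side: the redis input pops from the head
    # (BLPOP-like), so messages come out oldest-first.
    return broker.popleft() if broker else None

rsyslog_push('{"message":"status=sent"}')
rsyslog_push('{"message":"GET / 200"}')
print(logstash_pop())  # oldest message first
```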


Note:

On the central log server, edit /etc/rsyslog.conf.

Add the following to enable the redis output module:

# Provides output to Redis
module(load="omhiredis")

Add an output template:

template(name="json_lines" type="list" option.json="on") {
    constant(value="{")
    constant(value="\"timestamp\":\"")   property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"message\":\"")  property(name="rawmsg-after-pri")
    constant(value="\",\"host\":\"")     property(name="hostname")
    constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
    constant(value="\",\"program\":\"")  property(name="syslogtag")
    constant(value="\"}")
}
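To make the template concrete, here is a minimal Python mirror of what json_lines renders for one syslog message. The property values are hypothetical samples; also note that rsyslog's option.json="on" JSON-escapes the values, which this sketch skips.

```python
import json

def render_json_lines(props):
    # Concatenate the template's constants and property values into
    # one JSON object per log line, as the rsyslog template above does.
    # NOTE: unlike option.json="on", this sketch does not escape values.
    return ('{"timestamp":"' + props["timereported"] +
            '","message":"'  + props["rawmsg-after-pri"] +
            '","host":"'     + props["hostname"] +
            '","severity":"' + props["syslogseverity-text"] +
            '","facility":"' + props["syslogfacility-text"] +
            '","program":"'  + props["syslogtag"] + '"}')

# Hypothetical sample message:
line = render_json_lines({
    "timereported": "2017-01-01T12:00:00+08:00",
    "rawmsg-after-pri": "to=<user@example.com>, status=sent",
    "hostname": "mail1",
    "syslogseverity-text": "info",
    "syslogfacility-text": "mail",
    "syslogtag": "postfix/smtp[123]:",
})
# logstash's json codec/filter can parse this line directly:
assert json.loads(line)["severity"] == "info"
```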

Add the main queue settings:

main_queue(
  queue.workerthreads="2"      # threads to work on the queue
  queue.dequeueBatchSize="200" # max number of messages to process at once
  queue.size="10000"           # max queue size
)
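To illustrate what queue.dequeueBatchSize means, here is a small sketch (pure illustration, with the batch size mirroring the setting above): worker threads drain the queue in batches rather than one message at a time.

```python
# Illustration of queue.dequeueBatchSize: workers drain the main queue
# in batches of up to 200 messages rather than one at a time.
BATCH_SIZE = 200  # queue.dequeueBatchSize

def drain_in_batches(queue, batch_size=BATCH_SIZE):
    # Yield successive batches, as a worker thread would process them.
    for i in range(0, len(queue), batch_size):
        yield queue[i:i + batch_size]

msgs = ["msg%d" % i for i in range(450)]
batches = list(drain_in_batches(msgs))
print([len(b) for b in batches])  # 450 messages -> batches of 200, 200, 50
```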

Move to the end of rsyslog.conf and add filter conditions that forward the matching messages on for logstash to process. For more filtering rules, see Syslog Filter Conditions.

Forward only mail logs that contain a delivery status:

if $syslogtag contains 'postfix' and $rawmsg-after-pri contains 'status=' and not ($msg contains 'connect from' and $msg contains 'disconnect from' or $msg contains 'dsn-feeder-prod' or $msg contains 'root') then {
    action(
        name="push_postfix_to_redis"
        server="127.0.0.1"
        serverport="6379"
        type="omhiredis"
        mode="queue"
        key="mail"
        template="json_lines"
    )
}
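Since the exclusion clause mixes and/or, here is a Python mirror of the condition. As in most languages (and, to my understanding, in rsyslog's RainerScript), 'and' binds tighter than 'or', so the grouping below reflects how the rule is evaluated.

```python
def should_forward(syslogtag, rawmsg_after_pri, msg):
    # Mirror of the rsyslog condition above. 'and' binds tighter
    # than 'or' inside the exclusion clause.
    excluded = (("connect from" in msg and "disconnect from" in msg)
                or "dsn-feeder-prod" in msg
                or "root" in msg)
    return ("postfix" in syslogtag
            and "status=" in rawmsg_after_pri
            and not excluded)

# A normal delivery line is forwarded:
assert should_forward("postfix/smtp[99]:", "to=<a@b.com>, status=sent",
                      "to=<a@b.com>, status=sent")
# Anything mentioning root is excluded:
assert not should_forward("postfix/smtp[99]:", "from=<root>, status=sent",
                          "from=<root>, status=sent")
```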

Forward only web records tagged nginx whose content does not mention Zabbix:

if $syslogtag contains 'nginx' and not ($msg contains "Zabbix") then {
    action(
        name="push_nginx_to_redis"
        server="127.0.0.1"
        serverport="6379"
        type="omhiredis"
        mode="queue"
        key="nginx"
        template="json_lines"
    )
}

Start ELK with docker-compose

docker-compose -f /path/to/path/elk_example.yml up -d

Check that the containers are running:

docker logs <container name or ID> -f

Log collection nodes only need to be configured to forward their logs to the central server.
For mail logs, add the following to /etc/rsyslog.conf on each client node (@@ forwards over TCP; a single @ would use UDP):

mail.* @@rsyslog.server:514

For nginx logs, you only need to configure nginx.conf (or an individual server block) to send the access log to the central log server.
Note that the tag must match what the remote log server filters on. See the official nginx documentation.

access_log syslog:server=rsyslog.server:514,facility=local6,tag=nginx,severity=info main;