Rapid deployment of ELK using Docker
1. Introduction to the Software & Tools
ELK is the abbreviation for three pieces of open source software: Elasticsearch, Logstash and Kibana. A newer addition is Filebeat, a lightweight log collection and shipping tool (agent). Filebeat consumes few resources and is well suited to collecting logs on individual servers and forwarding them to Logstash; it is also the officially recommended tool for this job (a minimal Filebeat configuration sketch follows the Beats list below).
- Elasticsearch is an open source distributed search engine that collects, analyzes and stores data. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources and automatic search load balancing.
- Logstash is mainly used to collect, parse and filter logs, and supports a large number of data acquisition methods. It generally works in a client/server architecture: the client side is installed on the hosts whose logs need to be collected, while the server side filters and transforms the logs received from each node and forwards them to Elasticsearch.
- Kibana is also a free, open source tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze and search important log data.
Filebeat belongs to the Beats family. Beats currently contains four tools:
- Packetbeat (gathering network traffic data)
- Topbeat (gathering system, process and file system level CPU and memory usage data)
- Filebeat (gathering log file data)
- Winlogbeat (gathering Windows event log data)
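To make the Filebeat-to-Logstash flow described above concrete, here is a minimal sketch of a Filebeat configuration. It is not part of the original guide; the log path is an assumption, and note that the Logstash pipeline configured later in this article uses a tcp/json_lines input for Logback, so receiving Beats data would additionally require a beats input in Logstash.

```bash
# Minimal sketch (assumed paths, Filebeat 7.x syntax): tail application logs
# and forward them to Logstash.
cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log     # assumed log location, adjust to your app
output.logstash:
  hosts: ["127.0.0.1:5044"]      # requires a beats { port => 5044 } input in Logstash
EOF
```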
2. Pull the ELK Integrated Image
ELK image address: https://hub.docker.com/r/sebp...
```bash
[root@localhost /]# docker pull sebp/elk:740
```
Note: 740 is the ELK version number, i.e. the tag of the integrated ELK image.
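As an optional sanity check, you can confirm that the image was pulled:

```bash
# List the pulled ELK image and its tag
docker images sebp/elk
```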
3. Start ELK
```bash
[root@localhost /]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
[root@localhost /]# sysctl -p
[root@localhost /]# docker run -dit --name elk \
    -p 5601:5601 \
    -p 9200:9200 \
    -p 5044:5044 \
    -v /opt/elk-data:/var/lib/elasticsearch \
    -v /etc/localtime:/etc/localtime \
    sebp/elk:740
```
Note: -p specifies the port mappings: 5601 for Kibana access, 9200 for the Elasticsearch API, 5044 for the Logstash log-collection port; -v mounts the Elasticsearch data directory (and the host's timezone file).
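Once the container is up, a couple of quick checks run on the Docker host confirm that Elasticsearch is answering; it can take a minute or two after startup before port 9200 responds.

```bash
# Confirm the container is running and Elasticsearch answers on port 9200
docker ps --filter name=elk
curl http://127.0.0.1:9200               # cluster name/version JSON once ES is up
curl 'http://127.0.0.1:9200/_cat/health?v'
```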
4. Configure ELK (everything below is done inside the container)
Enter the Docker container
```bash
[root@localhost /]# docker exec -it elk /bin/bash
/etc/logstash/        ## Logstash configuration file path
/etc/elasticsearch/   ## Elasticsearch configuration file path
/var/log/             ## log path
```
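For orientation inside the container, a small sketch listing those directories shows what is available to edit or inspect:

```bash
# Inside the container: see which configuration files and logs are available
ls /etc/logstash/conf.d/      # pipeline configuration files
ls /etc/elasticsearch/        # elasticsearch.yml, jvm.options, ...
ls /var/log/                  # per-service log directories for troubleshooting
```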
- Configure Logstash
```
[root@localhost /]# vim /etc/logstash/conf.d/02-beats-input.conf

# Data input configuration: port -> port number; codec -> input format.
# Logback is used as the example here.
input {
  tcp {
    port => 5044
    codec => json_lines
  }
}

# Data output configuration: hosts -> host list; index -> name of the index to create.
# The output here is Elasticsearch.
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "%{[appName]}-%{+YYYY.MM.dd}"
  }
}
```
Note: this configuration file can either be created as a new file under the /etc/logstash/conf.d/ directory, or an existing one can be edited directly, depending on your needs.
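After restarting Logstash (see the service commands below), you can do a rough end-to-end test from the Docker host: push one JSON line into the tcp input and check that the corresponding index appears in Elasticsearch. The appName value "demo" here is just an illustration.

```bash
# Send one JSON log line to the Logstash tcp/json_lines input (bash /dev/tcp)
echo '{"appName":"demo","message":"hello elk"}' > /dev/tcp/127.0.0.1/5044

# After a few seconds, an index named demo-YYYY.MM.dd should exist
curl 'http://127.0.0.1:9200/_cat/indices/demo-*?v'
```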
- Configure Kibana in Chinese
```bash
[root@localhost /]# vim /opt/kibana/config/kibana.yml
```
Note: After opening the file, add i18n.locale: "zh-CN" to the file and restart the service.
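An equivalent way to do this without opening an editor (run inside the container) is to append the setting and restart Kibana:

```bash
# Append the Chinese locale setting and restart Kibana
echo 'i18n.locale: "zh-CN"' >> /opt/kibana/config/kibana.yml
service kibana restart
```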
- Common service operation commands
```bash
[root@localhost /]# service logstash start/restart/stop/status
[root@localhost /]# service elasticsearch start/restart/stop/status
[root@localhost /]# service kibana start/restart/stop/status
```
Note: the commands above start/restart/stop/query the status of each service. These services are fairly heavyweight, so the operations can take a while; do not re-run the same command while it is still in progress (see the polling sketch below).
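Rather than re-issuing a service command while it is still working, a small sketch like this waits until Elasticsearch is reachable again:

```bash
# Poll until Elasticsearch answers on port 9200 instead of repeating commands
until curl -s http://127.0.0.1:9200 >/dev/null; do
  echo "waiting for elasticsearch..."
  sleep 5
done
```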
5. Simple use of Kibana
See the Chinese documentation of the Kibana User Guide; the Kibana shown in this demonstration is the English version.
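As one concrete example of what the manual walks you through in the UI, an index pattern for the Logstash-created indices can also be created via Kibana's saved objects API. The pattern name appName-* below is an assumption matching the Logstash output configured earlier.

```bash
# Create an index pattern so the logs show up in Kibana's Discover view
curl -X POST 'http://127.0.0.1:5601/api/saved_objects/index-pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes":{"title":"appName-*","timeFieldName":"@timestamp"}}'
```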
6. Relevant Reference Documents
CentOS Docker Installation - Newbie Tutorial
7. Integration with Spring Boot (Logback)
```xml
<appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>xxx.xxx.xxx.xxx:xxx</destination>
    <includeCallerData>true</includeCallerData>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <includeCallerData>true</includeCallerData>
        <!-- Create an index based on the application name -->
        <customFields>{"appName":"appName"}</customFields>
    </encoder>
</appender>
```
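This appender comes from the logstash-logback-encoder library, which must be on the application's classpath. Once the Spring Boot application is logging, you can check from the Docker host that the per-application index is being created; replace appName below with the value you put in customFields.

```bash
# Verify that log events from the application reached Elasticsearch
curl 'http://127.0.0.1:9200/_cat/indices/appName-*?v'
curl 'http://127.0.0.1:9200/appName-*/_search?size=1&pretty'
```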