The Beats components of the Elastic Stack

1.Filebeat

Filebeat is mainly used for forwarding and centralizing log data. Installed on your servers as a lightweight agent, it monitors the log files or locations you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.

It supports shipping log files and logs from Linux servers, Windows hosts, and Docker containers to Elasticsearch.

Once autodiscover is configured for Docker containers, you can filter logs in Kibana by host and service name.
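With hints-based autodiscover, a container can opt in and choose a Filebeat module through Docker labels. A sketch as a docker-compose fragment (the web service and nginx image are just an illustration, not part of this article's stack):

```yaml
# docker-compose fragment; the "web" service is a hypothetical example
services:
  web:
    image: nginx
    labels:
      co.elastic.logs/enabled: "true"   # let Filebeat collect this container's logs
      co.elastic.logs/module: "nginx"   # parse them with the nginx Filebeat module
```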

When sending data to Logstash or Elasticsearch, Filebeat uses a backpressure-sensitive protocol to cope with high data volumes. If Logstash is busy processing data, it tells Filebeat to slow down its reads. Once the congestion is resolved, Filebeat returns to its original pace and continues shipping data.

Usage

1. Create a filebeat.docker.yml file

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'

2. Running the following command automatically creates the index, index pattern, and an index lifecycle policy in Elasticsearch. Once they are generated, you can adjust the policy's default settings.

sudo docker run \
docker.elastic.co/beats/filebeat:7.3.0 \
setup -E setup.kibana.host=kibana:5601 \
-E output.elasticsearch.hosts=["elasticsearch:9200"]

3. Run the Filebeat service

docker run -d \
  --name=filebeat \
  --user=root \
  --net=host \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:7.3.0 filebeat -e --strict.perms=false \
  -E output.elasticsearch.hosts=["elasticsearch:9200"]

2.Heartbeat

Heartbeat mainly checks whether a service or host is running and reachable; it can probe over ICMP, TCP, and HTTP.

Usage

1. Create a heartbeat.docker.yml file

heartbeat.monitors:
- type: http
  schedule: '@every 5s'
  urls:
    - http://elasticsearch:9200
    - http://kibana:5601

- type: icmp
  schedule: '@every 5s'
  hosts:
    - elasticsearch
    - kibana

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'
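The monitors above cover HTTP and ICMP; since Heartbeat can also probe plain TCP ports, a TCP monitor could be added alongside them, for example:

```yaml
- type: tcp
  schedule: '@every 5s'
  hosts:
    - elasticsearch:9200   # checks that the port accepts TCP connections
    - kibana:5601
```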

2. Running the following command automatically creates the index, index pattern, and an index lifecycle policy in Elasticsearch. Once they are generated, you can adjust the policy's default settings.

sudo docker run \
docker.elastic.co/beats/heartbeat:7.3.0 \
setup -E setup.kibana.host=kibana:5601 \
-E output.elasticsearch.hosts=["elasticsearch:9200"]

3. Run Heartbeat

sudo docker run -d \
  --name=heartbeat \
  --user=heartbeat \
  --net=host \
  --volume="$(pwd)/heartbeat.docker.yml:/usr/share/heartbeat/heartbeat.yml:ro" \
  docker.elastic.co/beats/heartbeat:7.3.0 \
  --strict.perms=false -e \
  -E output.elasticsearch.hosts=["elasticsearch:9200"]

3.Metricbeat

Metricbeat periodically collects metrics from the operating system, software, or services running on a server.
It supports a wide range of modules, such as docker, kafka, mysql, nginx, redis, zookeeper, and so on.
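As an illustration, enabling one of these modules is just another configuration block; an nginx module entry might look like this (the stub_status address and path are assumptions about your nginx setup, not values from this article):

```yaml
- module: nginx
  metricsets: ["stubstatus"]         # reads nginx's stub_status page
  period: 10s
  hosts: ["http://127.0.0.1"]        # assumed nginx address
  server_status_path: "nginx_status" # assumed stub_status location
```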

Usage

1. Create a metricbeat.docker.yml file

metricbeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    # Reload module configs as they change:
    reload.enabled: false

metricbeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

metricbeat.modules:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "healthcheck"
    - "info"
    #- "image"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'

2. Running the following command automatically creates the index, index pattern, and an index lifecycle policy in Elasticsearch. Once they are generated, you can adjust the policy's default settings.

sudo docker run \
docker.elastic.co/beats/metricbeat:7.3.0 \
setup -E setup.kibana.host=kibana:5601 \
-E output.elasticsearch.hosts=["elasticsearch:9200"]

3. Run Metricbeat

sudo docker run -d \
  --name=metricbeat \
  --user=root \
  --net=host \
  --volume="$(pwd)/metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  --volume="/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro" \
  --volume="/proc:/hostfs/proc:ro" \
  --volume="/:/hostfs:ro" \
  docker.elastic.co/beats/metricbeat:7.3.0 metricbeat -e \
  -E output.elasticsearch.hosts=["elasticsearch:9200"]

4.Packetbeat

Packetbeat is a lightweight network packet analyzer. It works by capturing the network traffic between application servers and decoding application-layer protocols (HTTP, MySQL, Redis, etc.).

1. It lets you view the traffic flowing between servers.
2. It can rank SQL statements by execution count and response time.

Usage

1. Create a packetbeat.docker.yml file

packetbeat.interfaces.device: any

packetbeat.flows:
  timeout: 30s
  period: 10s

packetbeat.protocols.dns:
  ports: [53]
  include_authorities: true
  include_additionals: true

packetbeat.protocols.http:
  ports: [80, 5601, 9200]

packetbeat.protocols.memcache:
  ports: [11211]

packetbeat.protocols.mysql:
  ports: [3306]


processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]

Note: with the default configuration the data volume is large (close to 100 GB per day on a single machine), so it needs to be tuned before real use.
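One way to cut the volume, sketched here with an illustrative condition rather than a recommendation, is to disable flow records and drop uninteresting events with a processor:

```yaml
# skip flow records entirely
packetbeat.flows:
  enabled: false

processors:
  # drop successful HTTP transactions, keeping only errors (illustrative condition)
  - drop_event:
      when:
        equals:
          http.response.status_code: 200
```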

2. Running the following command creates the Packetbeat index in Elasticsearch along with an index lifecycle policy to manage it. Afterwards, you can adjust the policy's default settings.

sudo docker run \
--cap-add=NET_ADMIN \
docker.elastic.co/beats/packetbeat:7.3.0 \
setup -E setup.kibana.host=kibana:5601 \
-E output.elasticsearch.hosts=["elasticsearch:9200"]

3. Start the Packetbeat service with docker

sudo docker run -d \
  --name=packetbeat \
  --user=packetbeat \
  --volume="/data/docker/packetbeat/7.3.0/packetbeat.docker.yml:/usr/share/packetbeat/packetbeat.yml:ro" \
  --cap-add="NET_RAW" \
  --cap-add="NET_ADMIN" \
  --network=host \
  docker.elastic.co/beats/packetbeat:7.3.0 \
  --strict.perms=false -e \
  -E output.elasticsearch.hosts=["elasticsearch:9200"]

4. View the Packetbeat dashboards in Kibana

5.Auditbeat

Auditbeat lets you closely monitor any file directories you are interested in on Linux, macOS, and Windows. File changes are sent to Elasticsearch in real time; each event contains metadata and cryptographic hashes of the file contents for further analysis.

Simply specify the file directories you want Auditbeat to monitor, and you're done.

auditd is mainly used to record security information and to trace system security events.
It records kernel-level activity, including file reads and writes, permission changes, and so on.

To use Auditbeat, you must first stop the auditd service that ships with Linux (service auditd stop).
Otherwise Auditbeat fails with an error like: Failed to set audit PID. An audit process is already running (PID 9701)

Usage

1. Create an auditbeat.docker.yml configuration file

auditbeat.modules:

- module: auditd
  audit_rules: |
    -w /etc/passwd -p wa -k identity
    -a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access

- module: file_integrity
  paths:
    - /bin
    - /usr/bin
    - /sbin
    - /usr/sbin
    - /etc
processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'

2. Running the following command creates the Auditbeat index in Elasticsearch along with an index lifecycle policy to manage it. Once the index is generated, you can adjust the policy's default settings.

sudo docker run \
--cap-add="AUDIT_CONTROL" \
--cap-add="AUDIT_READ" \
--privileged=true \
docker.elastic.co/beats/auditbeat:7.3.0 \
setup -E setup.kibana.host=kibana:5601 \
-E output.elasticsearch.hosts=["elasticsearch:9200"]

3. Start Auditbeat with docker, mounting the custom configuration file

sudo docker run -d \
  --name=auditbeat \
  --user=root \
  --net=host \
  --privileged=true \
  --volume="/data/docker/auditbeat/7.3.0/auditbeat.docker.yml:/usr/share/auditbeat/auditbeat.yml:ro" \
  --cap-add="AUDIT_CONTROL" \
  --cap-add="AUDIT_READ" \
  --pid=host \
  docker.elastic.co/beats/auditbeat:7.3.0 -e \
  --strict.perms=false \
  -E output.elasticsearch.hosts=["elasticsearch:9200"]

4. View the dashboards for the corresponding modules in Kibana

6.Journalbeat (experimental)

Journalbeat is the member of the Beats family dedicated to collecting journald logs.

journald

CentOS 7 uses systemd-journald as its central logging service, rsyslog to persist logs, and logrotate to rotate log files.

The systemd-journald daemon provides an improved log management service that collects messages from the kernel, from the early stages of the boot process, from the standard output and standard error of daemons, and from syslog, both during boot and at runtime. It writes these messages to a structured event log that, by default, is not preserved across reboots.

Unlike traditional file-based logging, journald stores logs in a binary format, so you need journalctl to view them.

Usage

1. The default configuration file

journalbeat.inputs:
- paths: []
  seek: cursor

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'
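With paths empty, Journalbeat reads the local journal. To narrow collection to a single systemd unit, a filter can be added; for example (docker.service is just an illustration):

```yaml
journalbeat.inputs:
- paths: ["/var/log/journal"]
  seek: cursor
  include_matches:
    - "systemd.unit=docker.service"   # collect only this unit's entries (illustrative)
```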

2. Use docker to generate the index pattern and lifecycle policy. Once generated, you can adjust the policy's default settings.

sudo docker run \
docker.elastic.co/beats/journalbeat:7.3.0 \
setup -E setup.kibana.host=kibana:5601 \
-E output.elasticsearch.hosts=["elasticsearch:9200"]

3. Start Journalbeat

sudo docker run -d \
  --name=journalbeat \
  --user=root \
  --net=host \
  --volume="/var/log/journal:/var/log/journal" \
  --volume="/etc/machine-id:/etc/machine-id" \
  --volume="/run/systemd:/run/systemd" \
  --volume="/etc/hostname:/etc/hostname:ro" \
  docker.elastic.co/beats/journalbeat:7.3.0 journalbeat -e --strict.perms=false \
  -E output.elasticsearch.hosts=["elasticsearch:9200"]  

7.Functionbeat

Functionbeat is an Elastic Beat that is deployed in a serverless environment to collect events generated by cloud services and ship them to Elasticsearch.

Version 7.3.0 supports deploying Functionbeat as an AWS Lambda service, responding to triggers defined for the following event sources:

  • CloudWatch Logs
  • Amazon Simple Queue Service (SQS)
  • Kinesis
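As a rough sketch, a functionbeat.yml for the CloudWatch Logs case might look like the following (the bucket and log group names are placeholders, not values from this article):

```yaml
functionbeat.provider.aws.deploy_bucket: "functionbeat-deploy"   # placeholder S3 bucket
functionbeat.provider.aws.functions:
  - name: cloudwatch
    enabled: true
    type: cloudwatch_logs
    triggers:
      - log_group_name: /aws/lambda/my-function                  # placeholder log group

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```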

8.Topbeat

Topbeat mainly read system load, CPU, and memory statistics; it has been replaced by Metricbeat.


Added by sguy on Thu, 15 Aug 2019 11:17:23 +0300