How to use Docker for application monitoring and log management

Docker has become an essential technology in modern application deployment, but monitoring applications and managing their logs inside Docker remains a challenge. As Docker's networking features, such as service discovery and load balancing, continue to mature, the need for a complete, stable, and efficient application monitoring system only grows.

In this article, we will briefly introduce how to use Docker for application monitoring and log management, with concrete code examples.

Using Prometheus for application monitoring

Prometheus is an open source, pull-based monitoring and alerting toolkit originally developed at SoundCloud. It is written in Go and is widely used in microservice and cloud-native environments. As a monitoring tool, it can track a Docker host's CPU, memory, network, and disk usage, and it supports a multi-dimensional data model, flexible queries, alerting, and visualization, allowing you to react and make decisions quickly.

Note that Prometheus collects data in pull mode: it periodically scrapes a /metrics endpoint exposed by the monitored application. Therefore, the monitored application must expose a /metrics endpoint, and Prometheus must be configured with the application's address and port so it can reach that endpoint. Below is a simple Node.js application.

const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello World!')
})

// Expose metrics in the Prometheus text exposition format.
// Metric lines must start at column 0, so the body is built without
// leading indentation, and the content type is set to plain text.
app.get('/metrics', (req, res) => {
  res.set('Content-Type', 'text/plain')
  res.send(
    '# HELP api_calls_total Total API calls\n' +
    '# TYPE api_calls_total counter\n' +
    'api_calls_total 100\n'
  )
})

app.listen(3000, () => {
  console.log('Example app listening on port 3000!')
})

In this code, we expose an api_calls_total metric through the /metrics endpoint.
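In a real service you would typically not hand-format the exposition text but use a metrics client library instead. The sketch below is one way to do this with the prom-client package (an assumption on our part; the original example does not use it), keeping the same api_calls_total counter:

// A minimal sketch using the prom-client library (npm install prom-client).
// Assumes prom-client v13+, where register.metrics() returns a Promise.
const express = require('express')
const client = require('prom-client')

const app = express()

// Same metric name as the hand-written example above.
const apiCalls = new client.Counter({
  name: 'api_calls_total',
  help: 'Total API calls'
})

app.get('/', (req, res) => {
  apiCalls.inc() // increment the counter on every request
  res.send('Hello World!')
})

app.get('/metrics', async (req, res) => {
  // prom-client renders all registered metrics in the Prometheus text format.
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})

app.listen(3000, () => {
  console.log('Example app listening on port 3000!')
})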

Next, pull the official Prometheus Docker image and create a docker-compose.yml file in which we wire Prometheus up to scrape the Node.js application.

version: '3'
services:
  node:
    image: node:lts
    working_dir: /app
    volumes:
      - ./app:/app   # directory containing index.js and its installed dependencies
    command: node index.js
    ports:
      - 3000:3000

  prometheus:
    image: prom/prometheus:v2.25.2
    volumes:
      - ./prometheus:/etc/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=15d'
    ports:
      - 9090:9090

In the docker-compose.yml file, we define two services: node, which runs the Node.js application, and prometheus, which performs the monitoring. The node service publishes port 3000, so the application's /metrics endpoint is reachable from the host at localhost:3000, and from other containers on the compose network via the service name node. Prometheus itself is published on port 9090.
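Before wiring up Prometheus, it is worth sanity-checking the endpoint directly from the host. A quick sketch, assuming Node 18+ (for the built-in fetch) and the 3000:3000 port mapping above:

// check-metrics.js — verify the /metrics endpoint is reachable from the host.
// Assumes Node 18+ (built-in fetch) and the 3000:3000 port mapping above.
(async () => {
  const res = await fetch('http://localhost:3000/metrics')
  console.log(await res.text()) // should print the api_calls_total lines
})()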

Finally, in the prometheus.yml file, we define the scrape targets.

global:
  scrape_interval:     15s
  evaluation_interval: 15s

scrape_configs:
  # This job only yields data if a node-exporter container is also running
  # on the compose network; the docker-compose.yml above does not define one.
  - job_name: 'node-exporter'
    static_configs:
    - targets: ['node:9100']

  - job_name: 'node-js-app'
    static_configs:
    - targets: ['node:3000']

In this file, we list the targets whose metrics should be collected; the targets parameter holds each application's host and port. Because the services share a compose network, the Node.js application is addressed by its service name node on port 3000.

Finally, run the docker-compose up command to start the application and its monitoring service, then open Prometheus at http://localhost:9090 to view the collected metrics.
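Beyond the web UI, Prometheus exposes an HTTP query API, so you can also read the collected values programmatically. A small sketch, assuming Prometheus is published on localhost:9090 as above:

// query-prometheus.js — read api_calls_total via the Prometheus HTTP API.
// Assumes Node 18+ and Prometheus published on localhost:9090 as above.
(async () => {
  const url = 'http://localhost:9090/api/v1/query?query=' +
    encodeURIComponent('api_calls_total')
  const body = await (await fetch(url)).json()
  // body.data.result is an array of { metric, value: [timestamp, value] }.
  for (const series of body.data.result) {
    console.log(series.metric, series.value)
  }
})()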

Using ElasticSearch and Logstash for log management

In Docker, application log data is scattered across many containers. To manage these logs in one place, you can use ElasticSearch and Logstash from the ELK stack to centralize them, making the logs much easier to monitor and analyze.

Before starting, you need to download the Docker images of Logstash and ElasticSearch and create a docker-compose.yml file.

In this file, we define three services. bls is an nginx-based service that stands in for a business API; every response it serves is logged to stdout and to a log file. The logstash service is built from the official Logstash Docker image and collects, filters, and forwards the logs. The elasticsearch service stores the logs and makes them searchable.

version: '3'
services:
  bls:
    image: nginx:alpine
    volumes:
      - ./log:/var/log/nginx
      - ./public:/usr/share/nginx/html:ro
    ports:
      - "8000:80"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"

  logstash:
    image: logstash:7.10.1
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      # Share the nginx log directory so the file input below can read it.
      - ./log:/var/log/nginx:ro
    environment:
      - "ES_HOST=elasticsearch"
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - "http.host=0.0.0.0"
      - "discovery.type=single-node"
    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data

In this configuration, we mount the container's nginx log directory onto the host's ./log directory, and through the logging options we cap the size (max-size) and number (max-file) of the JSON log files Docker keeps, limiting the storage the logs occupy.
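For reference, the json-file driver writes each captured line as a JSON object with log, stream, and time fields. A hypothetical helper for reading such a file back (the file name below is illustrative; Docker stores these files under /var/lib/docker/containers/ by default):

// read-docker-log.js — parse entries written by the json-file logging driver.
// Each line looks like: {"log":"...\n","stream":"stdout","time":"..."}
// The file name below is illustrative; point it at an actual container log.
const fs = require('fs')
const readline = require('readline')

const rl = readline.createInterface({
  input: fs.createReadStream('container-json.log')
})

rl.on('line', (line) => {
  const entry = JSON.parse(line)
  console.log(entry.time, entry.stream, entry.log.trimEnd())
})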

For the logstash service, we place a pipeline file named nginx_pipeline.conf in ./logstash/pipeline; it handles collecting, filtering, and forwarding the nginx logs. Following the usual ELK flow, Logstash processes incoming log lines according to the configured conditions and ships them to the Elasticsearch cluster created above. In this configuration file, we define the following processing logic:

input {
  file {
    path => "/var/log/nginx/access.log"
    # Read the file from the top on first start; the default is to tail
    # only newly appended lines.
    start_position => "beginning"
  }
}

filter {
  # Parse each line with the standard combined Apache/nginx access log pattern.
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    # ES_HOST is injected via the environment in docker-compose.yml.
    hosts => [ "${ES_HOST}:9200" ]
    index => "nginx_log_index"
  }
}

In this configuration file, we define a file input, which reads from the local log file. Next comes a filter that uses grok to parse each line against the standard combined access log pattern. Finally, the output sends the parsed events to the Elasticsearch cluster, whose address is injected into the container through the ES_HOST environment variable.
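To make the grok step concrete, a line matching %{COMBINEDAPACHELOG} produces an event with fields roughly like the following. The values here are made up, and the exact field set depends on the grok pattern version:

// Illustrative shape of an event after the grok filter above; the values
// are invented, and the exact field set depends on the pattern version.
const parsedEvent = {
  clientip: '172.18.0.1',
  timestamp: '07/Nov/2023:08:00:00 +0000',
  verb: 'GET',
  request: '/index.html',
  response: '200',
  bytes: '612',
  referrer: '"-"',
  agent: '"curl/8.0.1"'
}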

With the ELK configuration above complete, we have an efficient log management system: every log line is shipped to one central place and indexed, making search, filtering, and visualization easy.
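To confirm logs are flowing end to end, you can query the index directly. The sketch below assumes Elasticsearch's port 9200 is published to the host (add ports: - "9200:9200" to the elasticsearch service; the compose file above does not do this):

// search-logs.js — fetch a few recent entries from nginx_log_index.
// Assumes Node 18+ and Elasticsearch reachable on localhost:9200 (requires
// publishing port 9200 in docker-compose.yml, which the file above omits).
(async () => {
  const res = await fetch('http://localhost:9200/nginx_log_index/_search?size=5')
  const body = await res.json()
  for (const hit of body.hits.hits) {
    console.log(hit._source)
  }
})()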
