


Docker has become an essential technology in modern application delivery, but monitoring applications and managing their logs inside Docker remains a challenge. As Docker's networking features, such as service discovery and load balancing, continue to mature, the need for a complete, stable, and efficient application monitoring setup only grows.
In this article, we briefly introduce how to use Docker for application monitoring and log management, with concrete code examples.
Using Prometheus for application monitoring
Prometheus is an open-source, pull-based monitoring and alerting toolkit originally developed at SoundCloud. It is written in Go and widely used in microservice and cloud environments. As a monitoring tool, it can track a container's CPU, memory, network, and disk usage, and it also offers a multi-dimensional data model, flexible queries, alerting, and visualization, letting you react quickly and make informed decisions.
Note that Prometheus collects samples in pull mode: it periodically requests a /metrics endpoint exposed by the monitored application. Therefore, the application must expose such an endpoint, and Prometheus must be configured with the application's address and port. Below is a simple Node.js application.
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.get('/metrics', (req, res) => {
  res.send(`
# HELP api_calls_total Total API calls
# TYPE api_calls_total counter
api_calls_total 100
`)
})

app.listen(3000, () => {
  console.log('Example app listening on port 3000!')
})
In this code, the /metrics endpoint returns a single metric, api_calls_total, in the Prometheus text exposition format.
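The metric above is hard-coded; real applications usually rely on a client library such as prom-client to track counters and render them. As a sketch of the exposition format itself (the metric name, help text, and value are taken from the example above), a small helper can produce the same output:

```javascript
// Minimal sketch of the Prometheus text exposition format for a single
// counter. In a real application you would use a client library such as
// prom-client rather than formatting the text by hand.
function renderCounter(name, help, value) {
  return [
    `# HELP ${name} ${help}`,     // human-readable description
    `# TYPE ${name} counter`,     // metric type metadata
    `${name} ${value}`,           // the sample line itself
    ''                            // exposition output ends with a newline
  ].join('\n');
}

// Produces the same body the /metrics route above returns.
const body = renderCounter('api_calls_total', 'Total API calls', 100);
console.log(body);
```

The helper is deliberately limited to an unlabeled counter; labels and multiple samples per metric follow the same line-oriented format.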
Next, pull the official Prometheus Docker image and create a docker-compose.yml file in which Prometheus scrapes the Node.js application.
version: '3'
services:
  node:
    image: node:lts
    command: node index.js
    # mount the app source so index.js is available inside the container
    working_dir: /usr/src/app
    volumes:
      - ./:/usr/src/app
    ports:
      - "3000:3000"
  prometheus:
    image: prom/prometheus:v2.25.2
    volumes:
      - ./prometheus:/etc/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=15d'
    ports:
      - "9090:9090"
The docker-compose.yml file defines two services: node, which runs the Node.js application, and prometheus, which runs the monitoring server. The node service publishes port 3000, so the application's /metrics endpoint is reachable from the host and, via Docker's internal network, from the prometheus container under the hostname node. Prometheus itself serves its UI and API on port 9090.
Finally, in the prometheus.yml file, we define the scrape targets.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'node-js-app'
    static_configs:
      - targets: ['node:3000']
This file tells Prometheus which endpoints to scrape: the targets parameter holds the address and port of each Node.js application to collect from. Here we use the Docker service name node and port 3000.
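To make the pull model concrete, here is a rough sketch of what Prometheus does on every scrape_interval: fetch the /metrics body and turn each sample line into a name/value pair. This simplified parser ignores HELP/TYPE metadata and metric labels; the input is the exposition text from the Node.js example above:

```javascript
// Simplified model of a Prometheus scrape: parse the /metrics body into
// name/value pairs. Comment lines (# HELP, # TYPE) carry metadata and
// are skipped; labeled metrics are not handled in this sketch.
function parseExposition(text) {
  const samples = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const [name, value] = trimmed.split(/\s+/);        // "name value"
    samples[name] = Number(value);
  }
  return samples;
}

// The same text the /metrics endpoint above returns.
const scraped = parseExposition(`
# HELP api_calls_total Total API calls
# TYPE api_calls_total counter
api_calls_total 100
`);
console.log(scraped); // { api_calls_total: 100 }
```

The real Prometheus parser also handles labels, timestamps, and histogram/summary families, but the line-oriented shape is the same.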
Finally, run docker-compose up to start the application together with its monitoring service, and view the collected metrics in the Prometheus UI on port 9090.
Using Elasticsearch and Logstash for log management
In Docker, application logs are scattered across containers. To manage them in one central place, you can use Elasticsearch and Logstash from the ELK stack, which makes the logs much easier to monitor and analyze.
Before starting, pull the Logstash and Elasticsearch Docker images and create a docker-compose.yml file.
This file defines three services. bls simulates a business service that produces logs: here it is an nginx container that records an access-log entry to stdout and to a log file for every response. The logstash service is built from the official Logstash image and collects, filters, and forwards the logs. The elasticsearch service stores the logs and makes them searchable.
version: '3'
services:
  bls:
    image: nginx:alpine
    volumes:
      - ./log:/var/log/nginx
      - ./public:/usr/share/nginx/html:ro
    ports:
      - "8000:80"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  logstash:
    image: logstash:7.10.1
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      # share the nginx log directory so the pipeline's file input can read access.log
      - ./log:/var/log/nginx:ro
    environment:
      - "ES_HOST=elasticsearch"
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - "http.host=0.0.0.0"
      - "discovery.type=single-node"
    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data
In this configuration, the container's nginx log directory is mapped onto the host filesystem. The logging options limit the size (10 MB) and number (10) of JSON log files Docker keeps per container, capping the storage occupied by container logs at roughly 100 MB each.
For the logstash service, we define a pipeline named nginx_pipeline.conf in ./logstash/pipeline. It handles the collection, filtering, and forwarding of the nginx logs: as is typical in the ELK stack, Logstash processes each received log event according to configurable conditions and ships the result to the Elasticsearch cluster. The pipeline contains the following logic:
input {
  file {
    path => "/var/log/nginx/access.log"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => [ "${ES_HOST}:9200" ]
    index => "nginx_log_index"
  }
}
The file input reads data from the local log file. The grok filter then parses each line against the COMBINEDAPACHELOG pattern, extracting structured fields from the nginx access-log format. Finally, the elasticsearch output ships the parsed events to the Elasticsearch cluster, whose address is injected into the container through the ES_HOST environment variable, and stores them in the nginx_log_index index.
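For readers unfamiliar with grok, the following sketch shows roughly what %{COMBINEDAPACHELOG} extracts from one access-log line, rewritten as a JavaScript regular expression. The field names mirror the combined log format; the pattern is a simplification, not the full grok grammar:

```javascript
// Rough JavaScript equivalent of the grok %{COMBINEDAPACHELOG} pattern:
// client IP, auth user, timestamp, request line, status, bytes, referrer, agent.
const COMBINED = /^(\S+) \S+ (\S+) \[([^\]]+)\] "(\S+) (\S+) (\S+)" (\d{3}) (\d+|-) "([^"]*)" "([^"]*)"$/;

function parseAccessLine(line) {
  const m = COMBINED.exec(line);
  if (!m) return null; // unparseable line (grok would tag _grokparsefailure)
  return {
    clientip: m[1], auth: m[2], timestamp: m[3],
    verb: m[4], request: m[5], httpversion: m[6],
    response: Number(m[7]), bytes: m[8] === '-' ? 0 : Number(m[8]),
    referrer: m[9], agent: m[10],
  };
}

// Example combined-format access-log line (illustrative values).
const line = '203.0.113.7 - - [01/Jan/2024:12:00:00 +0000] ' +
  '"GET /index.html HTTP/1.1" 200 512 "-" "curl/8.0"';
console.log(parseAccessLine(line).response); // 200
```

Each named group becomes a field on the event, which is what makes the logs queryable in Elasticsearch rather than opaque strings.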
With the ELK configuration above complete, we have an efficient log management system: every log line is shipped to one central place and indexed, making search, filtering, and visualization straightforward.
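As a quick end-to-end check, you can query Elasticsearch directly over its HTTP search API. The sketch below only builds an example search body against the nginx_log_index index defined above; the response field comes from the grok filter, @timestamp is Logstash's default event timestamp, and the 5xx filter is purely illustrative (depending on the index mapping, response may be indexed as a string rather than a number):

```javascript
// Build an example _search request body for the nginx_log_index index:
// the most recent server errors (HTTP 5xx), newest first.
function buildErrorSearch(from, size) {
  return {
    query: { range: { response: { gte: 500 } } }, // 5xx responses only
    sort: [{ '@timestamp': { order: 'desc' } }],  // newest events first
    from,                                          // pagination offset
    size,                                          // page size
  };
}

// POST this body to http://localhost:9200/nginx_log_index/_search
const searchBody = buildErrorSearch(0, 20);
console.log(JSON.stringify(searchBody));
```

The same body can be sent with curl or any HTTP client; Kibana offers the equivalent filtering and visualization on top of the same index.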
The above is the detailed content of How to use Docker for application monitoring and log management. For more information, please follow other related articles on the PHP Chinese website!



