
How to configure highly available container log management on Linux


With the rapid development of container technology, more and more enterprises are adopting containerized deployments to improve system scalability and reliability. In a containerized environment, centrally managing container logs is essential for monitoring containers and understanding their running state.

This article explains how to configure highly available container log management on Linux, with code examples to help readers understand and put it into practice.

1. Choose the appropriate log management tool

When choosing a container log management tool, you need to consider the following aspects:

  1. Support for containerized environments: choose a tool that can easily collect and analyze log data produced by containers.
  2. High availability: to keep container logs continuously available, choose a tool that supports high-availability deployment and guards against log loss or interruption.
  3. Ease of use and deployment: a tool that is simple to deploy and operate reduces the workload on system administrators.

Common container log management tools include the ELK stack (Elasticsearch, Logstash, Kibana) and Fluentd; Prometheus is often deployed alongside them, but it is a metrics monitoring system rather than a log management tool.

2. Install and configure ELK (Elasticsearch, Logstash, Kibana)

ELK is a popular log management stack made up of three components: Elasticsearch, Logstash, and Kibana. The following uses CentOS as an example to show how to install and configure ELK.

  1. Install Elasticsearch
sudo yum install java-1.8.0-openjdk -y
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
sudo tee /etc/yum.repos.d/elasticsearch.repo <<EOF
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

sudo yum install elasticsearch -y
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
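Before moving on, it is worth confirming that Elasticsearch is reachable. A quick check, assuming the default port 9200 and no authentication enabled (both of which may differ on your system):

# should return a small JSON document with the cluster name and version
curl http://localhost:9200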
  2. Install Logstash
sudo tee /etc/yum.repos.d/logstash.repo <<EOF
[logstash]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

sudo yum install logstash -y
sudo systemctl enable logstash
sudo systemctl start logstash
  3. Install Kibana
sudo tee /etc/yum.repos.d/kibana.repo <<EOF
[kibana]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

sudo yum install kibana -y
sudo systemctl enable kibana
sudo systemctl start kibana
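All three services should now be installed and enabled. Their state can be checked in one command (a quick sketch; note that Logstash may keep restarting until a pipeline is configured in the next step):

sudo systemctl status elasticsearch logstash kibana --no-pager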
  4. Configure Logstash

In the Logstash configuration file /etc/logstash/conf.d/logstash.conf, add the following content:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
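The beats input above listens on port 5044 and expects a log shipper such as Filebeat to forward container logs to Logstash. A minimal filebeat.yml sketch is shown here for illustration; the Docker log path and the use of Filebeat itself are assumptions that may not match your environment:

# /etc/filebeat/filebeat.yml
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

output.logstash:
  hosts: ["localhost:5044"]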
  5. Configure Kibana

In Kibana’s configuration file /etc/kibana/kibana.yml, add the following content:

server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]

Restart the Logstash and Kibana services:

sudo systemctl restart logstash
sudo systemctl restart kibana

Now that ELK has been installed and configured, you can access and query container log data through Kibana's web interface.
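Note that a single Elasticsearch node is a single point of failure. For genuinely high-availability log storage, Elasticsearch is usually run as a multi-node cluster. A minimal sketch of the per-node settings in /etc/elasticsearch/elasticsearch.yml, assuming three hypothetical hosts named es-node-1, es-node-2, and es-node-3:

# elasticsearch.yml on es-node-1; repeat on the other hosts with the matching node.name
cluster.name: container-logs
node.name: es-node-1
network.host: 0.0.0.0
discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]

With three master-eligible nodes and the default of one replica per index shard, the loss of a single node does not interrupt log ingestion or queries; the hosts lists in the Logstash output and in kibana.yml can then include all cluster nodes.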

3. Use Fluentd for container log management

Fluentd is another popular container log management tool; it is designed to be simple, lightweight, and extensible. The following uses Ubuntu as an example to show how to install and configure Fluentd (packaged as td-agent).

  1. Install Fluentd
curl -L https://toolbelt.treasuredata.com/sh/install-ubuntu-focal-td-agent4.sh | sh
sudo systemctl enable td-agent
sudo systemctl start td-agent
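The configuration in the next step uses the elasticsearch output plugin. Recent td-agent packages bundle it, but if it is missing from your particular build (an assumption worth checking), it can be installed with td-agent-gem:

# install the Elasticsearch output plugin for td-agent if it is not already bundled
sudo td-agent-gem install fluent-plugin-elasticsearch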
  2. Configure Fluentd

Edit Fluentd's configuration file /etc/td-agent/td-agent.conf and add the following content:

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/td-agent/td-agent.log.pos
  tag kube.*
  format json
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  read_from_head true
</source>

<match kube.**>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
  flush_interval 5s
</match>

Restart the Fluentd service:

sudo systemctl restart td-agent

Fluentd is now installed and configured, and container log data can be collected and forwarded to Elasticsearch.
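To reduce the risk of log loss when Elasticsearch is temporarily unreachable (the high-availability concern from section 1), the match block can be given a persistent file buffer so that events are kept on disk and retried. A sketch of such a block, where the buffer path is an assumption to adjust for your system:

<match kube.**>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
  # buffer events on disk so they survive restarts and Elasticsearch outages
  <buffer>
    @type file
    path /var/log/td-agent/buffer/elasticsearch
    flush_interval 5s
    retry_forever true
  </buffer>
</match>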

Conclusion

Container log management is essential for keeping a container environment running stably and for troubleshooting problems. This article has described how to configure highly available container log management on Linux and provided installation and configuration examples for ELK and Fluentd. Readers can choose the tool that best fits their needs and configure it following the examples above.

References:

  • https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started-install.html
  • https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
  • https://www.elastic.co/guide/en/kibana/current/rpm.html
  • https://fluentbit.io/
  • https://docs.fluentd.org/v1.0/articles/docker-logging-efk-compose
