Kibana + Logstash + Elasticsearch log query system
This platform was built to make log searching easier for operations and development staff. Kibana is a free web front end for browsing and searching logs; Logstash integrates a wide range of log collection plug-ins and is an excellent tool for parsing logs with regular expressions; Elasticsearch is an open source search engine framework that supports clustered deployment.
1 Installation requirements
1.1 Theoretical Topology
1.2 Installation environment
1.2.1 Hardware environment
192.168.50.62 (HP DL385 G7, RAM: 12G, CPU: AMD 6128, DISK: 146G SAS × 4)
192.168.50.98 (HP DL385 G7, RAM: 12G, CPU: AMD 6128, DISK: 146G SAS × 6)
192.168.10.42 (Xen virtual machine, RAM: 8G, CPU: 4 cores, DISK: 100G)
1.2.2 Operating System
CentOS 5.6 X64
1.2.3 Web-server basic environment
Nginx + PHP (installation process omitted here)
1.2.4 Software List
JDK 1.6.0_25
logstash-1.1.0-monolithic.jar
elasticsearch-0.18.7.zip
redis-2.4.14.tar.gz
kibana
1.3 How to obtain
1.3.1 JDK acquisition path
http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u25-download-346242.html
1.3.2 Logstash acquisition path
http://semicomplete.com/files/logstash/logstash-1.1.0-monolithic.jar
1.3.3 Elasticsearch acquisition path
https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.18.7.zip
1.3.4 Kibana acquisition path
http://github.com/rashidkpc/Kibana/tarball/master
2 Installation steps
2.1 JDK download and installation
Basic installation
wget http://download.oracle.com/otn-pub/java/jdk/6u25-b06/jdk-6u25-linux-x64.bin
sh jdk-6u25-linux-x64.bin
mkdir -p /usr/java
mv ./jdk1.6.0_25 /usr/java
ln -s /usr/java/jdk1.6.0_25 /usr/java/default
Edit the /etc/profile file and add the following lines
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
Refresh environment variables
source /etc/profile
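To confirm the JDK is installed and active on the PATH, a quick check:
java -version
# expected to report something like: java version "1.6.0_25"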
2.2 Redis download and installation
wget http://redis.googlecode.com/files/redis-2.4.14.tar.gz
tar zxvf redis-2.4.14.tar.gz
cd redis-2.4.14
make -j24
make install
mkdir -p /data/redis
cd /data/redis/
mkdir {db,log,etc}
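A quick sanity check that the build and installation succeeded:
redis-server --version
# should report Redis server version 2.4.14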
2.3 Elasticsearch download and installation
cd /data/
mkdir -p elasticsearch && cd elasticsearch
wget --no-check-certificate https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.18.7.zip
unzip elasticsearch-0.18.7.zip
2.4 Logstash download and installation
mkdir -p /data/logstash/ && cd /data/logstash
wget http://semicomplete.com/files/logstash/logstash-1.1.0-monolithic.jar
2.5 Kibana download and installation
wget http://github.com/rashidkpc/Kibana/tarball/master --no-check-certificate
tar zxvf master
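The GitHub tarball unpacks into a directory named after the repository and commit (e.g. rashidkpc-Kibana-<sha>; the exact name varies). Assuming Kibana is served from the nginx web root used in section 3.3.4, the contents can be moved into place:
mv rashidkpc-Kibana-*/* /usr/local/nginx/html/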
3 Related configuration and startup
3.1 Redis configuration and startup
3.1.1 Configuration File
vim /data/redis/etc/redis.conf
#----------------------------------------------------
#this is the config file for redis
pidfile /var/run/redis.pid
port 6379
timeout 0
loglevel verbose
logfile /data/redis/log/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
dir /data/redis/db/
slave-serve-stale-data yes
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
slowlog-log-slower-than 10000
slowlog-max-len 128
vm-enabled no
vm-swap-file /tmp/redis.swap
vm-max-memory 0
vm-page-size 32
vm-pages 134217728
vm-max-threads 4
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
3.1.2 Redis startup
[logstash@Logstash_2 redis]$ redis-server /data/redis/etc/redis.conf &
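A minimal connectivity check; a PONG reply means the server is up:
redis-cli ping
# PONG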
3.2 Elasticsearch configuration and startup
3.2.1 Elasticsearch startup
[logstash@Logstash_2 redis]$ /data/elasticsearch/elasticsearch-0.18.7/bin/elasticsearch -p ../esearch.pid &
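Once started, Elasticsearch answers HTTP requests on port 9200 by default; a quick check:
curl http://127.0.0.1:9200/
# returns a small JSON document including the node name and version 0.18.7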
3.2.2 Elasticsearch cluster configuration
Nodes started with the same cluster name discover each other automatically (multicast by default). To confirm a node has joined the cluster, query it through the nodes API:
curl 127.0.0.1:9200/_cluster/nodes/192.168.50.62
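Overall cluster state can also be checked with the standard health endpoint (a green or yellow status means the cluster is serving):
curl 'http://127.0.0.1:9200/_cluster/health?pretty=true'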
3.3 Logstash configuration and startup
3.3.1 Logstash configuration file
Save the following indexer configuration as indexer.conf:
input {
  # pull events that the shipper agents pushed into the Redis list
  redis {
    host => "192.168.50.98"
    data_type => "list"
    key => "logstash:redis"
    type => "redis-input"
  }
}
filter {
  # parse syslog lines with the stock SYSLOGLINE pattern
  grok {
    type => "linux-syslog"
    pattern => "%{SYSLOGLINE}"
  }
  # NGINXACCESSLOG is a custom pattern; it must be defined in a patterns file
  grok {
    type => "nginx-access"
    pattern => "%{NGINXACCESSLOG}"
  }
}
output {
  # index the parsed events into Elasticsearch
  elasticsearch {
    host => "192.168.50.62"
  }
}
3.3.2 Logstash starts as indexer
java -jar logstash-1.1.0-monolithic.jar agent -f indexer.conf &
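After events start flowing, logstash creates one index per day named logstash-YYYY.MM.DD; their presence can be confirmed with the indices status API:
curl -s 'http://127.0.0.1:9200/_status?pretty=true' | grep logstash-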
3.3.3 Logstash starts as agent
Save the following as shipper.conf:
input {
  # tail the local syslog files
  file {
    type => "linux-syslog"
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
  }
  # tail the nginx access and error logs
  file {
    type => "nginx-access"
    path => "/usr/local/nginx/logs/access.log"
  }
  file {
    type => "nginx-error"
    path => "/usr/local/nginx/logs/error.log"
  }
}
output {
  # push raw events to the Redis list on the broker; the indexer pops them from there
  redis {
    host => "192.168.50.98"
    data_type => "list"
    key => "logstash:redis"
  }
}
Start the agent:
java -jar logstash-1.1.0-monolithic.jar agent -f shipper.conf &
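A simple end-to-end smoke test, assuming the broker Redis runs on 192.168.50.98 as configured above: write one syslog line, then check that the Redis list grew (the indexer will drain it shortly after):
logger "logstash shipper test"
redis-cli -h 192.168.50.98 llen logstash:redis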
3.3.4 kibana configuration
First, add a server block for Kibana to the nginx configuration:
server {
listen 80;
server_name logstash.test.com;
index index.php;
root /usr/local/nginx/html;
#charset koi8-r;
#access_log logs/host.access.log main;
location ~ .*\.(php|php5)$
{
#fastcgi_pass unix:/tmp/php-cgi.sock;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi.conf;
}
}
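After saving the server block, test and reload nginx. If logstash.test.com has no DNS record, a hosts entry on the client machine works for testing (192.168.50.62 is assumed here to be the web server):
/usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload
echo "192.168.50.62 logstash.test.com" >> /etc/hosts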
4 Performance Tuning
4.1 Elasticsearch Tuning
4.1.1 JVM Tuning
Edit the elasticsearch.in.sh file in the Elasticsearch bin directory and pin the JVM heap:
ES_CLASSPATH=$ES_CLASSPATH:$ES_HOME/lib/*:$ES_HOME/lib/sigar/*
if [ "x$ES_MIN_MEM" = "x" ]; then
ES_MIN_MEM=4g
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
ES_MAX_MEM=4g
fi
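Setting ES_MIN_MEM equal to ES_MAX_MEM pins the heap at a fixed size and avoids resize pauses. After restarting Elasticsearch, the flags can be confirmed with jps (shipped with the JDK):
jps -lvm | grep -i elasticsearch
# the output should include -Xms4g -Xmx4g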
4.1.2 Elasticsearch Index Compression
vim index_elastic.sh
#!/bin/bash
# enable _source compression for today's logstash indices
date=$(date +%Y.%m.%d)
# compress the _source field of each type in the new index
/usr/bin/curl -XPUT http://localhost:9200/logstash-$date/nginx-access/_mapping -d '{"nginx-access" : {"_source" : { "compress" : true }}}'
echo ""
/usr/bin/curl -XPUT http://localhost:9200/logstash-$date/nginx-error/_mapping -d '{"nginx-error" : {"_source" : { "compress" : true }}}'
echo ""
/usr/bin/curl -XPUT http://localhost:9200/logstash-$date/linux-syslog/_mapping -d '{"linux-syslog" : {"_source" : { "compress" : true }}}'
echo ""
Save the script and execute it
sh index_elastic.sh
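Because a new index is created each day, the script only helps if it runs daily; a cron entry such as the following takes care of that (the script path is this guide's assumption):
crontab -e
# run a few minutes after midnight, once the new day's index exists
5 0 * * * /bin/sh /data/elasticsearch/index_elastic.sh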
5 Usage
5.1 Logstash query page
Use Firefox or Google Chrome to visit http://logstash.test.com
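Kibana's search box accepts Lucene query syntax over the logstash event fields. Two illustrative examples (the field names follow the logstash 1.1 event schema and depend on the grok patterns in use, so treat them as assumptions):
@type:"nginx-access"
@message:error AND @type:"linux-syslog"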
