The method I can think of so far:
Use awk to analyze the logs, aggregate by condition, and update the database.
But when the log files are large, this approach becomes inefficient.
Is there a simpler way?
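For reference, here is a minimal sketch of the kind of one-pass aggregation described above, written in Python rather than awk; the log path and the assumption of nginx's default "combined" log format are illustrative, not part of the original question.

```python
# Minimal sketch: one pass over an nginx access log, counting hits per status code.
# Assumes the default "combined" log format; adjust the field index for other formats.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path

status_counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        fields = line.split()
        if len(fields) > 8:              # in the combined format the status is the 9th field
            status_counts[fields[8]] += 1

for status, count in sorted(status_counts.items()):
    print(status, count)
```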
PHP中文网 2017-05-16 17:31:35
You're welcome to try our http_accounting module; it's available in the third-party modules list on the official nginx website~
大家讲道理 2017-05-16 17:31:35
Let me describe our setup; our traffic is around 19 million (1.9kw) requests:
1. The frontend reports tracking data by requesting <img src="/tj.html"/>.
2. nginx writes a separate access log just for tj.html.
3. syslog splits that log into one-minute segments on a schedule.
4. A cronjob runs every minute to process and analyze the newly split segment (a sketch of this step is shown below).
Right now we update a MySQL database once a minute; we are planning to keep the current day's data in Redis and move the historical records to MongoDB.
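A minimal sketch of what the per-minute cron step might look like, assuming the pipeline described above; the log format check, the database schema, and the use of sqlite3 as a stand-in for the MySQL/Redis store are all illustrative assumptions.

```python
# Minimal sketch of a per-minute cron job: read one rotated slice of the tj.html
# access log, count the hits, and upsert the total into a database keyed by minute.
# sqlite3 stands in for the MySQL/Redis store mentioned in the answer.
import sqlite3
import sys
from datetime import datetime

def process_slice(log_path: str, db_path: str = "stats.db") -> None:
    hits = 0
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if "/tj.html" in line:       # requests for the tracking beacon
                hits += 1

    minute = datetime.now().strftime("%Y-%m-%d %H:%M")
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS pv_per_minute (minute TEXT PRIMARY KEY, hits INTEGER)"
    )
    # Accumulate if the same minute is processed more than once.
    conn.execute(
        "INSERT INTO pv_per_minute (minute, hits) VALUES (?, ?) "
        "ON CONFLICT(minute) DO UPDATE SET hits = hits + excluded.hits",
        (minute, hits),
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    # cron would invoke this with the path of the freshly rotated one-minute slice.
    process_slice(sys.argv[1])
```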
黄舟 2017-05-16 17:31:35
As long as the logs are rotated regularly, the files processed in each pass will not be very large.
Then write a small program to do the statistics, which is quite efficient.
If you need more flexible queries, you can also load the log records into a database, build indexes on time and the other necessary fields, and query directly with SQL.
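As an illustration of that last suggestion, here is a minimal sketch (again with sqlite3 standing in for whatever database you pick) of storing parsed log records in a table indexed on time and then answering a question with plain SQL; the schema, field names, and query are assumptions for the example.

```python
# Minimal sketch: store parsed log records with indexes on time and the fields you
# filter on, then answer ad-hoc questions with SQL. Schema and query are illustrative.
import sqlite3

conn = sqlite3.connect("access_log.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS access_log (
           ts     TEXT,    -- request time, e.g. '2017-05-16 17:31:35'
           url    TEXT,
           status INTEGER
       )"""
)
# Index on time plus the other fields queried most often.
conn.execute("CREATE INDEX IF NOT EXISTS idx_log_ts ON access_log (ts)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_log_url ON access_log (url)")

# Insert rows as the log-processing program parses them (dummy values shown).
conn.execute(
    "INSERT INTO access_log (ts, url, status) VALUES (?, ?, ?)",
    ("2017-05-16 17:31:35", "/tj.html", 200),
)
conn.commit()

# Flexible ad-hoc query: hits per URL within a time window.
for url, hits in conn.execute(
    """SELECT url, COUNT(*) FROM access_log
       WHERE ts BETWEEN '2017-05-16 17:00:00' AND '2017-05-16 18:00:00'
       GROUP BY url ORDER BY COUNT(*) DESC"""
):
    print(url, hits)

conn.close()
```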