
Introduction to various methods of reading large files with PHP


Reading large files has always been a headache. When developing in PHP, small files can be read directly with the built-in functions, but once the file becomes very large, the usual approaches either stop working or take far too long. Let's look at several ways to solve the problem of reading large files in PHP; I hope the examples help you.

In PHP, the fastest way to read a file is to use functions such as file and file_get_contents: a few simple lines of code do the job nicely. But when the file being operated on is large, these functions may fall short. Starting from a concrete requirement, the following explains the common ways of reading large files.

Requirements:
There is an 800M log file with about 5 million lines. Use PHP to return the contents of the last few lines.

Implementation method:

1. Use the file function directly
The file function reads the entire file into memory at once. To keep poorly written programs from using so much memory that the system runs out and the server crashes, PHP limits memory usage by default to 16M, set through memory_limit = 16M in php.ini; if this value is set to -1, memory usage is unlimited.
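As a quick check, you can read back the limit currently in effect at runtime with ini_get(); a minimal sketch:

<?php
  // Show the memory_limit currently in effect, e.g. "16M" or "-1".
  echo ini_get('memory_limit'), PHP_EOL;
?>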

The following is a piece of code that uses file to extract the last line of this file:

<?php
  ini_set('memory_limit', '-1');
  $file = 'access.log';
  $data = file($file);
  $line = $data[count($data) - 1];
  echo $line;
?>

The entire code execution takes 116.9613 (s).
My machine has 2G of memory. When I pressed F5 to run it, the screen basically froze and only recovered after almost 20 minutes. As you can see, reading such a large file straight into memory has serious consequences, so avoid it unless there is truly no alternative, and do not raise memory_limit too high either; otherwise the only option left is to call the machine room and have the server rebooted.
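If you want to quantify that cost rather than just feel it, memory_get_peak_usage() reports how much memory the script actually consumed; a minimal sketch, assuming the same access.log:

<?php
  ini_set('memory_limit', '-1');
  $data = file('access.log');   // every line is loaded into an array
  echo count($data), " lines\n";
  echo round(memory_get_peak_usage(true) / 1048576), " MB peak memory\n";
?>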

2. Call the Linux tail command directly to show the last few lines
On the Linux command line you can simply run tail -n 10 access.log to display the last few lines of a log file, and PHP can invoke the tail command directly. The PHP code is as follows:

<?php
  $file = 'access.log';
  $file = escapeshellarg($file); // safely escape the command-line argument
  $line = `tail -n 1 $file`;
  echo $line;
?>

The entire code execution takes 0.0034 (s)
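If this is going to be reused, the call can be wrapped in a small helper; a minimal sketch (the function name tailViaShell is only for illustration):

<?php
// Return the last $n lines of $file by shelling out to tail.
// Requires a Unix-like system with the tail command and shell_exec enabled.
function tailViaShell($file, $n = 10)
{
    $n = (int) $n;
    $file = escapeshellarg($file);
    return shell_exec("tail -n $n $file");
}

echo tailViaShell('access.log', 10);
?>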

3. Use PHP's fseek to operate on the file directly
This is the most common approach. It does not need to read the entire file into memory; instead it works on the file pointer directly, so it is very efficient. There are several ways to use fseek on a file, and their efficiency can differ slightly; a few commonly used methods follow.

Method 1
First use fseek to jump to the end (EOF) of the file, then scan backwards to find the start of the last line and read that line, then find the start of the line before it and read it, and so on, until $num lines have been collected.
The implementation code is as follows

<?php
$file = 'access.log';
$fp = fopen($file, "r");
$line = 10;      // number of lines to fetch
$pos = -2;       // start just before the file's trailing newline
$t = " ";
$data = "";
while ($line > 0)
{
    // scan backwards one character at a time until a newline is found
    while ($t != "\n")
    {
        fseek($fp, $pos, SEEK_END);
        $t = fgetc($fp);
        $pos--;
    }
    $t = " ";
    // the pointer now sits at the start of a line; read it
    // (note: lines are appended to $data from last to first)
    $data .= fgets($fp);
    $line--;
}
fclose($fp);
echo $data;
?>

The entire code execution takes 0.0095 (s)
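Note that with this approach the lines are appended to $data from last to first; if you need them in file order, they can be reversed afterwards. A small illustrative addition (not in the original), assuming $data from the snippet above:

<?php
// $data holds the lines last-first; split, drop the empty trailing piece, reverse.
$lines = array_reverse(array_filter(explode("\n", $data), 'strlen'));
echo implode("\n", $lines), "\n";
?>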

Method 2
Still use fseek to read from the end of the file, but instead of one character at a time, read in chunks. Each chunk read is prepended to a buffer, and the number of newline characters (\n) in the buffer tells us whether the last $num lines have been read yet.
The implementation code is as follows

<?php
$file = 'access.log';
$fp = fopen($file, "r");
$num = 10;       // number of lines to fetch
$chunk = 4096;   // bytes to read per chunk
$readData = '';
// guard against filesize() overflowing on very large files (32-bit PHP)
$fs = sprintf("%u", filesize($file));
$max = (intval($fs) == PHP_INT_MAX) ? PHP_INT_MAX : filesize($file);
for ($len = 0; $len < $max; $len += $chunk)
{
    // read at most $chunk bytes, fewer once we reach the start of the file
    $seekSize = ($max - $len > $chunk) ? $chunk : $max - $len;
    fseek($fp, ($len + $seekSize) * -1, SEEK_END);
    $readData = fread($fp, $seekSize) . $readData;
    // once the buffer holds at least $num + 1 newlines, the last $num lines are complete
    if (substr_count($readData, "\n") >= $num + 1)
    {
        preg_match("!(.*?\n){" . ($num) . "}$!", $readData, $match);
        $data = $match[0];
        break;
    }
}
fclose($fp);
echo $data;
?>

The entire code execution takes 0.0009(s).
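The timings quoted throughout this article can be reproduced by wrapping each snippet in a microtime(true) pair; a minimal sketch:

<?php
$start = microtime(true);

// ... place the snippet being measured here ...

printf("The entire code execution takes %.4f (s)\n", microtime(true) - $start);
?>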

Method 3

<?php
function tail($fp, $n, $base = 5)
{
    assert($n > 0);
    $pos = $n + 1;
    $lines = array();
    while (count($lines) <= $n)
    {
        $lines = array();
        // fseek() returns -1 (rather than throwing) when the offset lies
        // before the start of the file; in that case read from the top.
        $atStart = (fseek($fp, -$pos, SEEK_END) === -1);
        if ($atStart)
        {
            rewind($fp);
        }
        while (!feof($fp))
        {
            $lines[] = fgets($fp);
        }
        if ($atStart)
        {
            break;           // the whole file has been read; no point retrying
        }
        $pos *= $base;       // widen the window for the next attempt
    }
    // the first line read may be partial, so keep only the last $n lines
    return array_slice($lines, -$n);
}
var_dump(tail(fopen("access.log", "r+"), 10));
?>

The entire code execution takes 0.0003(s)

Method 4: PHP's stream_get_line function reads quickly; it took about 20 seconds to read through a large file of 500,000 records. The example code is as follows:

<?php
$fp = fopen('./iis.log', 'r');   // the log file
$logarray = array();
while (!feof($fp))
{
    // for ($j = 1; $j <= 1000; $j++) {   // or: read the next 1000 lines into the array
    $logarray[] = stream_get_line($fp, 65535, "\n");
    //     break;
    // }
}
fclose($fp);
?>
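To meet the original requirement (only the last few lines) without keeping every line in the array, stream_get_line can be combined with a small rolling buffer; a minimal sketch that is not part of the original example:

<?php
$file = './iis.log';
$num = 10;                 // number of trailing lines to keep
$buffer = array();

$fp = fopen($file, 'r');
while (!feof($fp))
{
    $line = stream_get_line($fp, 65535, "\n");
    if ($line === false)
    {
        break;
    }
    $buffer[] = $line;
    if (count($buffer) > $num)
    {
        array_shift($buffer);   // drop the oldest line
    }
}
fclose($fp);

echo implode("\n", $buffer), "\n";
?>

Note that this still scans the whole file from the beginning, so for an 800M log the fseek-based methods above remain much faster.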

The above are the four ways of reading large files in PHP; I hope they are helpful to everyone's learning.
