When processing large files, the infamous PHP fatal error "Allowed memory size of X bytes exhausted" is a recurring problem. It occurs when file_get_contents() tries to read the entire contents of a sizeable file into memory at once, exceeding the memory_limit configured for the script.
Instead of loading the whole file at once, a more memory-efficient approach is to open the file as a stream with fopen() and read it in smaller chunks with fread(). This keeps peak memory usage bounded by the chunk size, which is critical when handling large files.
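As a rough sketch of that pattern (the path "my/large/file" and the 4 KB chunk size are placeholders, not values from the original example), the raw loop looks like this:
<code class="php">// Minimal sketch: stream a file in 4 KB chunks instead of reading it all at once.
$handle = fopen("my/large/file", "r");
if ($handle !== false) {
    while (!feof($handle)) {
        $chunk = fread($handle, 4096); // only ~4 KB is held in memory at a time
        // ... work with $chunk ...
    }
    fclose($handle);
}</code>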
The custom function below wraps this pattern in a reusable helper that hands each chunk to a callback, similar in spirit to Node.js's stream-based file API:
<code class="php">function file_get_contents_chunked($file, $chunk_size, $callback) { try { $handle = fopen($file, "r"); $i = 0; while (!feof($handle)) { call_user_func_array($callback, array(fread($handle, $chunk_size), &$handle, $i)); $i++; } } catch (Exception $e) { trigger_error("file_get_contents_chunked::" . $e->getMessage(), E_USER_NOTICE); return false; } fclose($handle); return true; }</code>
This function accepts three parameters: the file path, the desired chunk size in bytes, and a callback that is invoked once for each chunk read.
The file_get_contents_chunked() function can be used as follows:
<code class="php">$success = file_get_contents_chunked("my/large/file", 4096, function ($chunk, &$handle, $iteration) { /* Process the chunk here... */ });</code>
Running several regular expressions over every chunk of a large file is comparatively slow. Where possible, prefer native string functions such as strpos(), substr(), trim(), and explode().
Instead of:
<code class="php">$pattern = '/\r\n/';
$replacement = '';
$newData = preg_replace($pattern, $replacement, $myData);</code>
Use:
<code class="php">$newData = str_replace("\r\n", "", $myData);</code>
(The "^M" some editors display is simply how a carriage return is rendered; match the actual "\r\n" characters rather than a literal caret followed by M.)
By applying these techniques, you can process large files effectively without running into memory exhaustion errors.