Downloading Large Files with Curl Without Memory Overload
When downloading a large remote file with cURL, the common pattern of buffering the entire response in memory (via CURLOPT_RETURNTRANSFER) becomes problematic: memory usage grows with the size of the file and can quickly exhaust PHP's memory limit. The optimized approach below avoids this.
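For context, the memory-heavy pattern being replaced typically looks like the following sketch ($url is assumed to hold the remote file's address; this is illustrative, not code from the original article):

```php
<?php
// Anti-pattern for large files: CURLOPT_RETURNTRANSFER buffers the
// ENTIRE response body in PHP's memory before anything reaches disk.
$ch = curl_init($url); // $url is assumed to be defined elsewhere
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($ch); // the whole file now lives in memory
curl_close($ch);
file_put_contents('localfile.tmp', $data);
?>
```

For a 2 GB download, $data here is a 2 GB string, which is exactly the overload the streaming approach below is designed to avoid.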
Instead of reading the downloaded file into memory, we can stream it directly to disk with the following code:
<?php
// Allow the script to run as long as the download takes
set_time_limit(0);

// Destination file to save the download; 'wb' opens it in binary mode
$fp = fopen(dirname(__FILE__) . '/localfile.tmp', 'wb');

// Replace spaces in the URL with %20 so cURL receives a valid URL
// ($url is assumed to hold the remote file's address)
$ch = curl_init(str_replace(' ', '%20', $url));

// Set a high timeout so large downloads are not cut off mid-transfer
curl_setopt($ch, CURLOPT_TIMEOUT, 600);

// Stream the cURL response directly to the file handle on disk
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

// Execute the download, then close all resources
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
This code snippet initializes cURL, sets an appropriate timeout, and configures it to write the response directly to the specified file instead of loading it into memory. By streaming the download to disk, you can significantly reduce memory usage while handling large files.
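If you need to inspect or transform the data as it arrives (for example, to report progress or compute a checksum), CURLOPT_WRITEFUNCTION gives you chunk-by-chunk control while keeping memory usage flat. The sketch below is an assumption-laden variant, not part of the original article; $url is again assumed to be defined:

```php
<?php
// Sketch: receive the download through a callback in small chunks,
// so only one chunk is held in memory at a time.
$fp = fopen('localfile.tmp', 'wb');
$ch = curl_init($url); // $url assumed to hold the remote file's address
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use ($fp) {
    // Per-chunk processing (progress, hashing, etc.) would go here.
    // The callback must return the number of bytes it handled,
    // otherwise cURL aborts the transfer.
    return fwrite($fp, $chunk);
});
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
```

For a plain file download, CURLOPT_FILE (as in the main example) is simpler; reach for CURLOPT_WRITEFUNCTION only when you genuinely need to act on each chunk.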