
PHP multi-threaded web page crawling implementation code

WBOY · Original · 2016-07-29 08:43:19

Because the PHP language itself does not support multi-threading, crawler programs written in plain PHP are not very efficient. In this situation we often turn to the Curl Multi functions, which can fetch multiple URLs concurrently. Since the Curl Multi functions are this powerful, can they also be used to write a concurrent file downloader? Of course they can. My code is given below.
Code 1: Write the fetched page source directly to a file

<?php
$urls = array(
    'http://www.sina.com.cn/',
    'http://www.sohu.com/',
    'http://www.163.com/'
); // URLs of the pages to crawl
$save_to = '/test.txt'; // the fetched page source is appended to this file
$st = fopen($save_to, "a");
$mh = curl_multi_init();
foreach ($urls as $i => $url) { // initialize one easy handle per URL
    $conn[$i] = curl_init($url);
    curl_setopt($conn[$i], CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)");
    curl_setopt($conn[$i], CURLOPT_HEADER, 0);
    curl_setopt($conn[$i], CURLOPT_CONNECTTIMEOUT, 60);
    curl_setopt($conn[$i], CURLOPT_FILE, $st); // write the fetched source straight to the file
    curl_multi_add_handle($mh, $conn[$i]);
}
do { // run the transfers; curl_multi_select() waits for activity instead of busy-looping
    $status = curl_multi_exec($mh, $active);
    if ($active) {
        curl_multi_select($mh);
    }
} while ($active && $status == CURLM_OK);
foreach ($urls as $i => $url) { // clean up
    curl_multi_remove_handle($mh, $conn[$i]);
    curl_close($conn[$i]);
}
curl_multi_close($mh);
fclose($st);
?>
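Because all handles in Code 1 share one CURLOPT_FILE stream, the responses can end up interleaved in a single file. A minimal sketch of the per-file variant: each URL gets its own output stream. The helper `filename_for_url()` and the function `multi_download()` are hypothetical names introduced here for illustration.

```php
<?php
// Hypothetical helper: derive a local file name from the URL's host,
// so each page lands in its own file rather than one shared handle.
function filename_for_url(string $url): string {
    $host = parse_url($url, PHP_URL_HOST) ?: 'page';
    return $host . '.html';
}

// Sketch: same curl_multi pattern as above, but with one
// CURLOPT_FILE stream per easy handle.
function multi_download(array $urls, string $dir): void {
    $mh = curl_multi_init();
    $conns = [];
    $files = [];
    foreach ($urls as $i => $url) {
        $files[$i] = fopen($dir . '/' . filename_for_url($url), 'w');
        $conns[$i] = curl_init($url);
        curl_setopt($conns[$i], CURLOPT_HEADER, 0);
        curl_setopt($conns[$i], CURLOPT_CONNECTTIMEOUT, 60);
        curl_setopt($conns[$i], CURLOPT_FILE, $files[$i]); // per-URL output file
        curl_multi_add_handle($mh, $conns[$i]);
    }
    do {
        $status = curl_multi_exec($mh, $active);
        if ($active) {
            curl_multi_select($mh); // sleep until there is activity
        }
    } while ($active && $status == CURLM_OK);
    foreach ($conns as $i => $conn) {
        curl_multi_remove_handle($mh, $conn);
        curl_close($conn);
        fclose($files[$i]);
    }
    curl_multi_close($mh);
}
```

For example, `multi_download($urls, '/tmp')` with the URL list above would produce `/tmp/www.sina.com.cn.html`, `/tmp/www.sohu.com.html` and `/tmp/www.163.com.html`.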


Code 2: Store the fetched page source in a variable first, then write it to a file

<?php
$urls = array(
    'http://www.sina.com.cn/',
    'http://www.sohu.com/',
    'http://www.163.com/'
);
$save_to = '/test.txt'; // the fetched page source is written to this file
$st = fopen($save_to, "a");
$mh = curl_multi_init();
foreach ($urls as $i => $url) {
    $conn[$i] = curl_init($url);
    curl_setopt($conn[$i], CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)");
    curl_setopt($conn[$i], CURLOPT_HEADER, 0);
    curl_setopt($conn[$i], CURLOPT_CONNECTTIMEOUT, 60);
    curl_setopt($conn[$i], CURLOPT_RETURNTRANSFER, true); // return the fetched source as a string instead of writing it out directly
    curl_multi_add_handle($mh, $conn[$i]);
}
do {
    $status = curl_multi_exec($mh, $active);
    if ($active) {
        curl_multi_select($mh);
    }
} while ($active && $status == CURLM_OK);
foreach ($urls as $i => $url) {
    $data = curl_multi_getcontent($conn[$i]); // get the fetched source as a string
    fwrite($st, $data); // write it to the file; it could just as well go into a database
}
foreach ($urls as $i => $url) {
    curl_multi_remove_handle($mh, $conn[$i]);
    curl_close($conn[$i]);
}
curl_multi_close($mh);
fclose($st);
?>
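Neither listing checks whether any individual transfer actually succeeded. After the exec loop finishes, curl_multi_info_read() returns one completion message per handle, whose `result` field is a CURLE_* error code (0, i.e. CURLE_OK, means success). A minimal sketch; the helper `describe_result()` is a hypothetical name introduced here for illustration.

```php
<?php
// Hypothetical helper: turn one completion message into a log line.
// A curl error number of 0 (CURLE_OK) means the transfer succeeded.
function describe_result(string $url, int $curl_errno): string {
    return $curl_errno === 0
        ? "OK: $url"
        : "FAILED ($curl_errno): $url";
}

// Sketch of draining the message queue. This assumes $mh from
// Code 2 is still open, i.e. it runs after the exec loop but
// before curl_multi_remove_handle()/curl_close():
//
// while ($msg = curl_multi_info_read($mh)) {
//     $url = curl_getinfo($msg['handle'], CURLINFO_EFFECTIVE_URL);
//     echo describe_result($url, $msg['result']), "\n";
// }
```

Failed URLs could then be collected and retried instead of silently writing empty content to the file.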

The above introduced multi-threaded web page crawling in PHP using the Curl Multi functions. I hope it is helpful to friends who are interested in PHP tutorials.
