My enthusiasm for web-page parsing and crawler building has never faded. Today I built a crawler with the open-source simple_html_dom.php parsing library:
<?php
/*
 * Pho spider v1.0
 * Written by Radish.ghost 2015.1.20
 */
//error_reporting(0); // close error reporting
// curl mode: I will implement it in a later version
include_once("simple_html_dom.php");

$html = file_get_html('http://www.baidu.com'); // the URL you want to crawl
$tmp = array(); // save the URLs found in the first pass
foreach ($html->find('a') as $e) {
    $f = $e->href;
    if ($f == '') continue;
    if ($f[0] == '/') $f = 'http://www.baidu.com' . $f; // complete relative URLs
    if (strpos($f, 'https://') === 0) continue; // simple_html_dom might not parse https:// URLs
    if (stripos($f, "baidu") === false) continue; // skip URLs outside this site
    echo $f . '<br>';
    $tmp[] = $f; // save the URL into the array
}
foreach ($tmp as $r) { // crawl the URLs saved in $tmp[]
    $html2 = file_get_html($r); // repeat the step
    foreach ($html2->find('a') as $a) {
        $u = $a->href;
        if ($u == '') continue;
        if ($u[0] == '/') $u = 'http://www.baidu.com' . $u;
        if (strpos($u, 'https://') === 0) continue;
        if (stripos($u, "baidu") === false) continue;
        echo $u . '<br>';
    }
    $html2 = null;
}
?>
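The positional character checks in the code above (`$f[0]=='/'`, `$f[4]=='s'`) are fragile. As one possible alternative, the link normalization and filtering could be sketched with PHP's built-in parse_url(); `normalize_link` is my own hypothetical helper name, not part of simple_html_dom:

```php
<?php
// Sketch: normalize and filter one link the way the crawler does,
// but using parse_url() instead of positional character checks.
// normalize_link() is a hypothetical helper, not a library function.
function normalize_link($href, $base = 'http://www.baidu.com')
{
    if ($href === '' || $href[0] === '#') return null;  // skip empty links / page anchors
    if ($href[0] === '/') $href = $base . $href;        // complete relative URLs
    $parts = parse_url($href);
    if (!isset($parts['scheme']) || $parts['scheme'] !== 'http') {
        return null; // skip https:// and other schemes (simple_html_dom may choke on them)
    }
    if (!isset($parts['host']) || stripos($parts['host'], 'baidu') === false) {
        return null; // stay on the target site
    }
    return $href;
}
```

Note the strict `=== false` comparison on stripos(): with a loose `== FALSE`, a match at position 0 would also count as "not found" and the link would be wrongly skipped.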
At the end there is always a warning: Fatal error: Call to a member function find() on a non-object in D:\xampp\htdocs\html\index.php on line 21. After going over it with a senior classmate I fixed many small mistakes, but this one is still unsolved. Hoping an expert can point me in the right direction.
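That fatal error usually means file_get_html() returned false rather than a DOM object: the fetch failed, or the page exceeded simple_html_dom's built-in size limit, so calling ->find() on the result blows up. A minimal sketch of a guard, assuming the same file_get_html() API (the helper name `crawl_links` is mine):

```php
<?php
// Sketch: guard every file_get_html() call before using the result.
// file_get_html() returns false when the fetch fails or the document
// exceeds simple_html_dom's MAX_FILE_SIZE, so check before calling find().
include_once("simple_html_dom.php");

function crawl_links($url) // hypothetical helper name
{
    $links = array();
    $html = @file_get_html($url);
    if (!$html) {          // fetch failed: skip this URL instead of fataling
        return $links;
    }
    foreach ($html->find('a') as $a) {
        $links[] = $a->href;
    }
    $html->clear();        // release the DOM to avoid memory leaks
    return $links;
}
```

Applying the same `if (!$html2) continue;` check inside the second loop of the crawler should make the fatal error go away, since any one of the collected URLs can fail to load.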
-------------------- divider --------------------
simple_html_dom download:
https://github.com/Ph0enixxx/simple_html_dom
= = My home computer can't run git4win, so here is the direct link.