PHP implementation code for recording spider crawling history
This article presents a short piece of PHP code that records search-engine spider (crawler) visits to a log file. It is useful for analyzing how spiders crawl a website and as a reference for SEO research.
The PHP code for recording spider crawling history is as follows:

<?php
// Record spider crawl history: determine which search engine the visit comes from.
function get_naps_bot() {
    $useragent = strtolower($_SERVER['HTTP_USER_AGENT']);
    if (strpos($useragent, 'googlebot') !== false)    { return 'Google'; }
    if (strpos($useragent, 'baiduspider') !== false)  { return 'Baidu'; }
    if (strpos($useragent, 'msnbot') !== false)       { return 'Bing'; }
    if (strpos($useragent, 'slurp') !== false)        { return 'Yahoo'; }
    if (strpos($useragent, 'sosospider') !== false)   { return 'Soso'; }
    if (strpos($useragent, 'sogou spider') !== false) { return 'Sogou'; }
    if (strpos($useragent, 'yodaobot') !== false)     { return 'Yodao'; }
    return false;
}

// Current time, formatted for the log entry.
function nowtime() {
    return date("Y-m-d.G:i:s");
}

$searchbot = get_naps_bot();

// Save the crawl record when the visitor is a recognized spider.
if ($searchbot) {
    $tlc_thispage = addslashes($_SERVER['HTTP_USER_AGENT']);
    // HTTP_REFERER may be missing for spider requests, so guard against an undefined index.
    $url  = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
    $file = "robotlog.txt";
    $time = nowtime();
    $data = fopen($file, "a");
    // Log the time, the spider name, the referring URL and the raw user agent.
    fwrite($data, "Time:$time robot:$searchbot URL:$url UA:$tlc_thispage\n");
    fclose($data);
}
?>
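Once robotlog.txt has accumulated some entries, a quick tally shows which search engines crawl the site most often. The following is a minimal sketch, assuming the log format written by the code above; the tally script itself is illustrative and is not part of the original snippet.

<?php
// Hypothetical helper: count log entries in robotlog.txt per spider name.
$counts = array();
$lines  = file("robotlog.txt", FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($lines as $line) {
    // Each line looks like: Time:2024-05-01.8:30:00 robot:Google URL:... UA:...
    if (preg_match('/robot:(\S+)/', $line, $m)) {
        $name = $m[1];
        $counts[$name] = isset($counts[$name]) ? $counts[$name] + 1 : 1;
    }
}
arsort($counts);
foreach ($counts as $name => $n) {
    echo "$name: $n\n";
}
?>

Running this from the command line (php tally.php) prints the spiders in descending order of visits, which is a simple starting point for SEO analysis of the crawl log.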