
PHP implementation code for recording spider crawling history

WBOY · Original · 2016-07-25 08:56:28
This article presents a short PHP script that records search-engine spider visits to a log file. The resulting log is useful for analyzing how spiders crawl a website and for SEO research. Readers who need such a feature can refer to it.

The PHP code for recording spider crawl history is as follows:

<?php
// Record spider crawl history: identify which search engine the visit came from
function get_naps_bot()
{
    $useragent = isset($_SERVER['HTTP_USER_AGENT']) ? strtolower($_SERVER['HTTP_USER_AGENT']) : '';

    if (strpos($useragent, 'googlebot') !== false) {
        return 'Google';
    }
    if (strpos($useragent, 'baiduspider') !== false) {
        return 'Baidu';
    }
    if (strpos($useragent, 'msnbot') !== false) {
        return 'Bing';
    }
    if (strpos($useragent, 'slurp') !== false) {
        return 'Yahoo';
    }
    if (strpos($useragent, 'sosospider') !== false) {
        return 'Soso';
    }
    if (strpos($useragent, 'sogou spider') !== false) {
        return 'Sogou';
    }
    if (strpos($useragent, 'yodaobot') !== false) {
        return 'Yodao';
    }
    return false;
}

// Timestamp for the log entry
function nowtime()
{
    return date("Y-m-d.G:i:s");
}

$searchbot = get_naps_bot();

// Save the crawl record
if ($searchbot) {
    $useragent = addslashes($_SERVER['HTTP_USER_AGENT']);
    $url       = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : ''; // page the spider requested
    $file      = "robotlog.txt";
    $time      = nowtime();
    $data      = fopen($file, "a");
    fwrite($data, "Time:$time robot:$searchbot URL:$url UA:$useragent\n");
    fclose($data);
}
?>
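
To use the logger, one minimal approach is to include the script at the top of every page you want tracked. The snippet below is only a sketch; the file name spider_log.php is an assumption for illustration, not part of the original article.

<?php
// Sketch: include the spider logger on a tracked page.
// "spider_log.php" is an assumed file name; use whatever name you saved the script under.
include_once 'spider_log.php';
?>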


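Since the article suggests using the log to analyze spider activity, here is a minimal sketch (not part of the original code) that reads robotlog.txt and counts visits per spider. It assumes the log format written by the script above.

<?php
// Sketch: summarize robotlog.txt by counting visits per spider.
// Assumes each line looks like: Time:... robot:Name URL:... UA:...
$counts = array();
$lines  = file('robotlog.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($lines as $line) {
    if (preg_match('/robot:(\S+)/', $line, $m)) {
        $name = $m[1];
        $counts[$name] = isset($counts[$name]) ? $counts[$name] + 1 : 1;
    }
}
arsort($counts); // most active spiders first
foreach ($counts as $name => $hits) {
    echo "$name: $hits visits\n";
}
?>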