
PHP code to obtain the crawling records of each search spider

WBOY · Original · 2016-07-25
This article presents a piece of PHP code that records the crawl visits of the major search engine spiders; refer to it if you need this functionality.

It logs site crawls by the following search engines: Baidu, Google, Bing, Yahoo, Soso, Sogou, and Yodao.

Code:

<?php
/**
 * Log crawl visits from major search engine spiders.
 * edit by bbs.it-home.org
 */
function get_naps_bot()
{
    // No user agent means it cannot be a known spider.
    if (empty($_SERVER['HTTP_USER_AGENT'])) {
        return false;
    }
    $useragent = strtolower($_SERVER['HTTP_USER_AGENT']);

    if (strpos($useragent, 'googlebot') !== false) {
        return 'Google';
    }
    if (strpos($useragent, 'baiduspider') !== false) {
        return 'Baidu';
    }
    if (strpos($useragent, 'msnbot') !== false) {
        return 'Bing';
    }
    if (strpos($useragent, 'slurp') !== false) {
        return 'Yahoo';
    }
    if (strpos($useragent, 'sosospider') !== false) {
        return 'Soso';
    }
    if (strpos($useragent, 'sogou spider') !== false) {
        return 'Sogou';
    }
    if (strpos($useragent, 'yodaobot') !== false) {
        return 'Yodao';
    }
    return false;
}

function nowtime()
{
    return date("Y-m-d G:i:s");
}

$searchbot = get_naps_bot();
if ($searchbot) {
    $useragent = addslashes($_SERVER['HTTP_USER_AGENT']);
    $url       = $_SERVER['REQUEST_URI'];  // page the spider requested
    $file      = "bbs.it-home.org.txt";    // log file; must be writable by the web server
    $time      = nowtime();

    $data = fopen($file, "a");
    fwrite($data, "Time:$time robot:$searchbot URL:$url agent:$useragent\n");
    fclose($data);
}
?>
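
Each spider visit appends one line to the log file, something like: Time:2016-07-25 8:57:25 robot:Baidu URL:/index.php agent:Mozilla/5.0 (compatible; Baiduspider/2.0; ...). Below is a minimal sketch of how you might read back the most recent entries from that log; it assumes the same file name used above, and the helper name show_recent_crawls is only for illustration.

<?php
// Minimal sketch: print the most recent spider visits from the crawl log.
// Assumes the log file name used in the script above; adjust as needed.
function show_recent_crawls($file = "bbs.it-home.org.txt", $limit = 20)
{
    if (!is_readable($file)) {
        echo "No crawl log found.\n";
        return;
    }
    // Read all lines, skip empty ones, then keep only the last $limit entries.
    $lines  = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $recent = array_slice($lines, -$limit);
    foreach ($recent as $line) {
        echo htmlspecialchars($line) . "<br />\n";
    }
}

show_recent_crawls();
?>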

