This article presents a short piece of PHP code for recording the traces that search-engine spiders (crawlers) leave on a site. Readers who need this functionality can use it as a reference.
Use PHP code to detect spider crawlers and record their visits to a log file. The code is as follows:

<?php
// Detect the name of the visiting spider crawler; returns false if none matches.
// by bbs.it-home.org
function isSpider(){
    $bots = array(
        'Google'    => 'googlebot',
        'Baidu'     => 'baiduspider',
        'Yahoo'     => 'yahoo slurp',
        'Soso'      => 'sosospider',
        'Msn'       => 'msnbot',
        'Altavista' => 'scooter',
        'Sogou'     => 'sogou spider',
        'Yodao'     => 'yodaobot'
    );
    $userAgent = strtolower($_SERVER['HTTP_USER_AGENT']);
    foreach ($bots as $k => $v){
        // Check whether the bot signature appears in the user-agent string.
        if (strstr($userAgent, $v)){
            return $k;
        }
    }
    return false;
}

// Determine which spider is crawling the page, then save its trace.
// An empty HTTP_USER_AGENT during collection can also be used to block scrapers.
// Capture the spider crawler -- by bbs.it-home.org
$spi = isSpider();
if($spi){
    $tlc_thispage = addslashes($_SERVER['HTTP_USER_AGENT']);
    $file   = 'robot.txt';               // log file for spider traces
    $time   = date('Y-m-d H:i:s');
    $handle = fopen($file, 'a+');
    $PR     = $_SERVER['REQUEST_URI'];
    fwrite($handle, "Time:{$time} ROBOT:{$spi} AGENT:{$tlc_thispage} URL:{$PR}\n");
    fclose($handle);
}
?>

Each spider visit is appended to robot.txt as one line containing the time, the spider name, the full user-agent string, and the requested URL.
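The comments above also mention blocking content scrapers when HTTP_USER_AGENT is empty, but the original snippet does not implement that check. Below is a minimal sketch of one way it might look; the 403 response and the exit message are assumptions for illustration, not part of the original code.

<?php
// Hedged sketch (assumption, not from the original code): many scraping
// scripts send no User-Agent header at all, so a request with an empty
// HTTP_USER_AGENT can simply be rejected before the page is rendered.
if (empty($_SERVER['HTTP_USER_AGENT'])) {
    header('HTTP/1.1 403 Forbidden');   // assumed response code; adjust as needed
    exit('Access denied.');
}
?>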