
PHP code sharing for grabbing spider crawler traces

WBOY · Original · 2016-07-25 08:57:30
This article shares a short piece of PHP code that detects visits from search-engine spiders and records their traces. Readers who need this functionality can use it as a reference.

The script inspects the HTTP User-Agent header, identifies which spider is visiting, and appends each visit to a log file. The code is as follows:

<?php
// Detect which search-engine spider is visiting (also usable for anti-scraping checks).
// by bbs.it-home.org
function isSpider(){
    $bots = array(
        'Google'    => 'googlebot',
        'Baidu'     => 'baiduspider',
        'Yahoo'     => 'yahoo slurp',
        'Soso'      => 'sosospider',
        'Msn'       => 'msnbot',
        'Altavista' => 'scooter',
        'Sogou'     => 'sogou spider',
        'Yodao'     => 'yodaobot'
    );
    // An empty HTTP_USER_AGENT is itself a hint of scraping, so treat it as "not a known spider".
    if (empty($_SERVER['HTTP_USER_AGENT'])) {
        return false;
    }
    $userAgent = strtolower($_SERVER['HTTP_USER_AGENT']);
    foreach ($bots as $name => $signature) {
        // The bot signature must appear somewhere inside the User-Agent string.
        if (strpos($userAgent, $signature) !== false) {
            return $name;
        }
    }
    return false;
} // by bbs.it-home.org

// After identifying the spider, save its trace to a log file.
// Capture spider crawlers -- by bbs.it-home.org
$spi = isSpider();
if ($spi) {
    $agent  = addslashes($_SERVER['HTTP_USER_AGENT']);
    $file   = 'robot.txt';               // log file for spider visits
    $time   = date('Y-m-d H:i:s');
    $url    = $_SERVER['REQUEST_URI'];
    $handle = fopen($file, 'a+');
    fwrite($handle, "Time:{$time} ROBOT:{$spi} AGENT:{$agent} URL:{$url}\n");
    fclose($handle);
}
?>
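
Once the log has accumulated some entries, you may want a quick summary of how often each spider has visited. Below is a minimal sketch of such a report script; it assumes the log file name (robot.txt) and the "Time:... ROBOT:... AGENT:... URL:..." line format produced by the code above.

<?php
// Minimal sketch: count visits per spider from the log written above.
// Assumes the robot.txt file name and line format used by the logging code.
$counts = array();
$lines  = file('robot.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
if ($lines !== false) {
    foreach ($lines as $line) {
        // Pull the spider name out of the "ROBOT:<name>" field.
        if (preg_match('/ROBOT:(\S+)/', $line, $m)) {
            $bot = $m[1];
            $counts[$bot] = isset($counts[$bot]) ? $counts[$bot] + 1 : 1;
        }
    }
}
arsort($counts);
foreach ($counts as $bot => $n) {
    echo "{$bot}: {$n} visits\n";
}
?>

Run it from the same directory as the log file (for example, php spider_report.php, where spider_report.php is a hypothetical file name) to print a per-spider visit count, most active first.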

