
Implementing a crawler visit log in PHP (really handy)


This article walks through logging crawler visits: first create a crawler table, then have robot.php identify each visiting spider and insert a record into that table, so that all crawler activity can later be read back out of the database. The code is as follows.

Database design

create table crawler
(
 crawler_ID bigint(20) unsigned not null auto_increment primary key,
 crawler_category varchar(50) not null,
 crawler_date datetime not null,
 crawler_url varchar(255) not null,
 crawler_IP varchar(50) not null
) default charset=utf8;
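The listing page further down sorts the log by crawler_date, so once the table grows it can help to index that column. The original schema does not include this; a simple optional addition would be:

-- Optional: speeds up "order by crawler_date desc" on a large log
alter table crawler add index idx_crawler_date (crawler_date);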

The following file, robot.php, identifies the visiting crawler and writes the information to the database:

<?php
 $ServerName = $_SERVER["SERVER_NAME"];
 $ServerPort = $_SERVER["SERVER_PORT"];
 $ScriptName = $_SERVER["SCRIPT_NAME"];
 $QueryString = $_SERVER["QUERY_STRING"];
 $serverip = $_SERVER["REMOTE_ADDR"];

 // Rebuild the URL that was requested
 $Url = "http://".$ServerName;
 if ($ServerPort != "80")
 {
  $Url = $Url.":".$ServerPort;
 }
 $Url = $Url.$ScriptName;
 if ($QueryString != "")
 {
  $Url = $Url."?".$QueryString;
 }
 $GetLocationURL = $Url;

 // Identify the crawler from the (lowercased) User-Agent string
 $agent = $_SERVER["HTTP_USER_AGENT"];
 $agent = strtolower($agent);
 $Bot = "";
 if (strpos($agent, "bot") !== false)
 {
  $Bot = "Other Crawler";
 }
 if (strpos($agent, "googlebot") !== false)
 {
  $Bot = "Google";
 }
 if (strpos($agent, "mediapartners-google") !== false)
 {
  $Bot = "Google Adsense";
 }
 if (strpos($agent, "baiduspider") !== false)
 {
  $Bot = "Baidu";
 }
 if (strpos($agent, "sogou spider") !== false)
 {
  $Bot = "Sogou";
 }
 if (strpos($agent, "yahoo") !== false)
 {
  $Bot = "Yahoo!";
 }
 if (strpos($agent, "msn") !== false)
 {
  $Bot = "MSN";
 }
 if (strpos($agent, "ia_archiver") !== false)
 {
  $Bot = "Alexa";
 }
 if (strpos($agent, "iaarchiver") !== false)
 {
  $Bot = "Alexa";
 }
 if (strpos($agent, "sohu") !== false)
 {
  $Bot = "Sohu";
 }
 if (strpos($agent, "sqworm") !== false)
 {
  $Bot = "AOL";
 }
 if (strpos($agent, "yodaobot") !== false) // $agent is lowercased, so match in lowercase
 {
  $Bot = "Yodao";
 }
 if (strpos($agent, "iaskspider") !== false)
 {
  $Bot = "Iask";
 }

 require("./dbinfo.php");
 date_default_timezone_set('PRC');
 $shijian = date("Y-m-d H:i:s", time()); // 24-hour timestamp

 // Connect to the MySQL server
 $connection = mysql_connect($host, $username, $password);
 if (!$connection)
 {
  die('Not connected : ' . mysql_error());
 }
 // Select the active MySQL database
 $db_selected = mysql_select_db($database, $connection);
 if (!$db_selected)
 {
  die('Can\'t use db : ' . mysql_error());
 }
 // Escape request-derived values before putting them into SQL
 $GetLocationURL = mysql_real_escape_string($GetLocationURL, $connection);
 $serverip = mysql_real_escape_string($serverip, $connection);
 // Insert the crawler record
 $query = "insert into crawler (crawler_category, crawler_date, crawler_url, crawler_IP) values ('$Bot','$shijian','$GetLocationURL','$serverip')";
 $result = mysql_query($query);
 if (!$result)
 {
  die('Invalid query: ' . mysql_error());
 }
?>
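robot.php pulls its connection settings from dbinfo.php, which the article does not show. A minimal sketch, assuming the variable names used above ($host, $username, $password, $database); the values here are placeholders:

<?php
// dbinfo.php - connection settings used by robot.php (placeholder values)
$host     = "localhost";
$username = "your_db_user";
$password = "your_db_password";
$database = "your_database";
?>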

That's it. You can now query the database to see when, and from which IP, each spider crawled which of your pages.
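For example, to pull the most recent visits by one particular crawler straight out of MySQL (a minimal query against the crawler table defined above):

-- Last 20 recorded visits by Baidu's spider, newest first
select crawler_date, crawler_IP, crawler_url
from crawler
where crawler_category = 'Baidu'
order by crawler_date desc
limit 20;

The page below reads the log back out of the crawler table and displays it with pagination; it relies on a PageClass paginator and a small mysql wrapper included from conn_new.php, neither of which is shown here.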

<?php
include './robot.php';
include '../library/page.Class.php';
$page = $_GET['page'];
include '../library/conn_new.php';

// Total number of logged visits, used to build the pager
$count = $mysql -> num_rows($mysql -> query("select * from crawler"));
$pages = new PageClass($count, 10, $_GET['page'], $_SERVER['PHP_SELF'].'?page={page}'); // 10 rows per page
$sql = "select * from crawler order by ";
$sql .= "crawler_date desc limit ".$pages -> page_limit.",".$pages -> myde_size;
$result = $mysql -> query($sql);
?>
<table width="100%">
 <thead>
  <tr>
   <td bgcolor="#CCFFFF"></td>
   <td bgcolor="#CCFFFF" align="center">Visit time</td>
   <td bgcolor="#CCFFFF" align="center">Crawler</td>
   <td bgcolor="#CCFFFF" align="center">Crawler IP</td>
   <td bgcolor="#CCFFFF" align="center">URL visited</td>
  </tr>
 </thead>
<?php
while ($myrow = $mysql -> fetch_array($result)) {
?>
<tr>
 <td><img src="../images/topicnew.gif" alt="" /></td>
 <td style="font-family:Georgia"><?php echo $myrow["crawler_date"]; ?></td>
 <td><?php echo $myrow["crawler_category"]; ?></td>
 <td><?php echo $myrow["crawler_IP"]; ?></td>
 <td><?php echo $myrow["crawler_url"]; ?></td>
</tr>
<?php
 }
?>
 </table>
<?php
 echo $pages -> myde_write();
?>
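If you only need a quick overview rather than the paginated listing, an aggregate query grouped by crawler works too; a sketch against the crawler table above:

-- Visit counts per crawler over the last 7 days, busiest first
select crawler_category, count(*) as visits
from crawler
where crawler_date >= date_sub(now(), interval 7 day)
group by crawler_category
order by visits desc;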

That is all the code needed to log crawler visits in PHP; hopefully it is useful.
