Several ways to crawl pages with PHP
When developing network programs, we often need to fetch files that are not local. Usually we use PHP to simulate a browser: send an HTTP request to a URL and receive the HTML source or XML data in return. We usually cannot output that data as-is; we need to extract the content we want and then format it so it displays in a friendlier way.

Let's briefly go over several methods PHP offers for fetching pages, and how each works:
1. The main method of crawling pages with PHP:
1. file() function
2. file_get_contents() function
3. fopen()->fread()->fclose() mode
4. curl method
5. fsockopen() function socket mode
6. Use plug-ins (such as: http://sourceforge.net/projects/snoopy/)
2. The main ways for PHP to parse HTML or XML code:
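The article does not give a concrete parsing example at this point. As one hedged sketch (not from the original), PHP's built-in DOMDocument class can pull pieces out of fetched HTML, for example the page title:

```php
<?php
// a minimal parsing sketch (an assumption, not part of the original article):
// load an HTML string and read the <title> element with DOMDocument
$html = '<html><head><title>Hello</title></head><body><p>Hi</p></body></html>';
$doc = new DOMDocument();
// @ suppresses warnings that real-world, imperfect markup often triggers
@$doc->loadHTML($html);
$title = $doc->getElementsByTagName('title')->item(0)->textContent;
echo $title; // prints "Hello"
```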
1. file() function
<?php
// define the URL to fetch
$url = 'http://t.qq.com';
// file() reads the remote content into an array, one line per element
$lines_array = file($url);
// join the array into a single string
$lines_string = implode('', $lines_array);
// output the content (you could also save it on your own server)
echo $lines_string;
2. file_get_contents() function
Using file_get_contents() or fopen() requires allow_url_fopen to be enabled. To enable it, edit php.ini and set allow_url_fopen = On. When allow_url_fopen is off, neither fopen() nor file_get_contents() can open remote files.
<?php
// define the URL to fetch
$url = 'http://t.qq.com';
// file_get_contents() reads the remote data in a single call
$lines_string = file_get_contents($url);
// output the content, escaped (you could also save it on your own server)
echo htmlspecialchars($lines_string);
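As a small extra sketch (not from the original), it can help to check allow_url_fopen at runtime and to give the fetch an explicit timeout via a stream context; the data:// URL below is only a self-contained stand-in for the real $url:

```php
<?php
// a minimal sketch (an assumption, not from the original article):
// check allow_url_fopen first, then fetch with an explicit timeout
if (!ini_get('allow_url_fopen')) {
    die("allow_url_fopen is Off; enable it in php.ini first\n");
}
$context = stream_context_create([
    'http' => ['timeout' => 5], // seconds
]);
// the third argument applies the context; with the real $url this would be
// file_get_contents($url, false, $context)
$lines_string = file_get_contents('data://text/plain,hello', false, $context);
echo $lines_string; // prints "hello"
```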
3. fopen()->fread()->fclose() mode

<?php
// define the URL to fetch
$url = 'http://t.qq.com';
// open the URL in binary mode
$handle = fopen($url, "rb");
// initialize the buffer
$lines_string = "";
// read the data in 1024-byte chunks until the stream is exhausted
do {
    $data = fread($handle, 1024);
    if (strlen($data) == 0) {
        break;
    }
    $lines_string .= $data;
} while (true);
// close the fopen handle and release the resource
fclose($handle);
// output the content (you could also save it on your own server)
echo $lines_string;
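The manual read loop above can also be replaced by stream_get_contents(), which drains an open handle in one call. A self-contained sketch (not from the original; an in-memory stream stands in for the remote handle):

```php
<?php
// a minimal sketch: stream_get_contents() reads everything remaining on a
// handle, replacing the fread() loop (demonstrated on an in-memory stream)
$handle = fopen('php://temp', 'r+b');
fwrite($handle, "some fetched data");
rewind($handle);
$lines_string = stream_get_contents($handle);
fclose($handle);
echo $lines_string; // prints "some fetched data"
```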
4. curl method

Using curl requires the curl extension to be enabled. On Windows, edit php.ini, remove the semicolon in front of extension=php_curl.dll, and copy ssleay32.dll and libeay32.dll into C:\WINDOWS\system32. On Linux, install the curl extension.

<?php
// define the URL and create a new cURL handle
$url = 'http://t.qq.com';
$ch = curl_init();
$timeout = 5;
// set the URL and the relevant options
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
// fetch the URL
$lines_string = curl_exec($ch);
// close the cURL handle and release system resources
curl_close($ch);
// output the content (you could also save it on your own server)
echo $lines_string;
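Since the introduction mentioned simulating browser access, note that curl can also send a User-Agent header and follow redirects. A hedged sketch (not from the original; the User-Agent string is just an example):

```php
<?php
// a sketch (an assumption, not from the original article): simulate a browser
// with cURL by setting a User-Agent and following redirects
$ch = curl_init('http://t.qq.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; example)');
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
$lines_string = curl_exec($ch);
curl_close($ch);
```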
5. fsockopen() function socket mode

Whether socket mode works correctly also depends on the server configuration. You can use phpinfo() to check which communication protocols the server has enabled.

<?php
// open a socket connection to the host on port 80
$fp = fsockopen("t.qq.com", 80, $errno, $errstr, 30);
if (!$fp) {
    echo "$errstr ($errno)<br />\n";
} else {
    // build a raw HTTP request by hand
    $out = "GET / HTTP/1.1\r\n";
    $out .= "Host: t.qq.com\r\n";
    $out .= "Connection: Close\r\n\r\n";
    fwrite($fp, $out);
    // read the response until the connection closes
    while (!feof($fp)) {
        echo fgets($fp, 128);
    }
    fclose($fp);
}
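Unlike the earlier methods, the raw data read from the socket includes the HTTP status line and headers before the body. A small sketch (not from the original) of separating them on the blank line that ends the header block:

```php
<?php
// a sketch (an assumption, not from the original article): a raw HTTP response
// is "headers, blank line, body"; split it on the first \r\n\r\n
$raw = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>body here</html>";
list($headers, $body) = explode("\r\n\r\n", $raw, 2);
echo $body; // prints "<html>body here</html>"
```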
6. Snoopy plug-in

The latest version is Snoopy-1.2.4.zip (last updated 2013-05-30), and it is recommended. Snoopy is a very popular and powerful scraping class that is also very easy to use; you can set its agent to simulate browser information.

Note: the agent is set around line 45 of the Snoopy.class.php file; search the file for "var $agent". You can get your own browser's user-agent string with echo $_SERVER['HTTP_USER_AGENT']; and copy the echoed content into the agent setting.

<?php
// include the Snoopy class file
require('Snoopy.class.php');
// instantiate the Snoopy class
$snoopy = new Snoopy;
$url = "http://t.qq.com";
// start fetching the content
$snoopy->fetch($url);
// save the fetched content into $lines_string
$lines_string = $snoopy->results;
// output the content (you could also save it on your own server)
echo $lines_string;