With the development of the Internet, web crawlers have become an important means of data collection. PHP, a language widely used in web development, ships with built-in functions that are well suited to crawler development. This article introduces several common PHP functions and demonstrates how to combine them into a basic crawler function.
1. file_get_contents function
The file_get_contents function reads the contents of a file and accepts either a local path or a URL, so we can use it to fetch pages from the Internet. In its simplest form it requires no configuration, which makes it easy to use. The following code demonstrates how to use file_get_contents to obtain the HTML content of a web page:
$url = 'http://example.com';
$html = file_get_contents($url);
echo $html;
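As a self-contained sketch that runs without a network, the same function can read a local file. Here we write a temporary file and read it back; for a remote page you would pass a URL instead (which requires the allow_url_fopen setting), optionally with a stream context to set a timeout or request headers.

```php
<?php
// Offline illustration: file_get_contents works on local paths as well as URLs.
$path = tempnam(sys_get_temp_dir(), 'demo');          // create a temporary file
file_put_contents($path, '<html><body>hello</body></html>');
$html = file_get_contents($path);                     // read it back
echo $html, "\n";
unlink($path);                                        // clean up
```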
2. preg_match function
The preg_match function is PHP's built-in regular expression matcher, used to determine whether a string matches a pattern (preg_match_all finds every match). Since most web page information is presented as HTML, we can use regular expressions to extract the content we need. The following code demonstrates how to use preg_match_all to extract all links from HTML:
$url = 'http://example.com';
$html = file_get_contents($url);
preg_match_all('/<a\s+href=["\']([^"\']+)["\']/i', $html, $matches);
print_r($matches[1]);
In the above code, the regular expression /<a\s+href=["']([^"']+)["']/i matches all a tags that have an href attribute, capturing the link targets.
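To illustrate the pattern without fetching anything, it can be run against an inline HTML snippet (the URLs below are made up for the example):

```php
<?php
// Self-contained illustration: extract hrefs from an inline HTML string.
$html = '<ul><li><a href="https://example.com/a">A</a></li>'
      . '<li><a href="https://example.com/b">B</a></li></ul>';
preg_match_all('/<a\s+href=["\']([^"\']+)["\']/i', $html, $matches);
print_r($matches[1]); // the two captured hrefs
```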
3. curl function
cURL is a PHP extension widely used in network programming; it sends requests to a given URL and retrieves the response. It supports many protocols, including HTTP, FTP, and SMTP, and also lets you set request headers, request parameters, and more. The following code demonstrates how to use the curl functions to obtain a web page's HTML content:
$url = 'http://example.com';
$ch = curl_init();                           // initialize cURL
curl_setopt($ch, CURLOPT_URL, $url);         // set the request URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response instead of printing it
$html = curl_exec($ch);                      // send the request and capture the response
curl_close($ch);                             // close the cURL handle
echo $html;
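A sketch of the same request cycle that runs offline (assuming the cURL extension is installed and the file:// protocol is not disabled in your build): cURL can fetch file:// URLs as well as http://, and in real crawlers you should also check curl_exec's return value for failure.

```php
<?php
// Offline illustration of the cURL request/response cycle.
$path = tempnam(sys_get_temp_dir(), 'demo');
file_put_contents($path, 'hello from curl');
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'file://' . $path); // file:// works like http:// here
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body as a string
$body = curl_exec($ch);
if ($body === false) {                            // always check for errors
    echo 'cURL error: ', curl_error($ch), "\n";
}
curl_close($ch);
unlink($path);
echo $body, "\n";
```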
4. Implementation of simple crawler function
Based on the functions above, we can easily write a simple crawler function to obtain information about a given web page. The following code demonstrates how to combine them into a crawler function that fetches a page's title and all of its links:
function spider($url) {
    $html = file_get_contents($url);                                    // fetch the page HTML
    preg_match('/<title>([^<]+)<\/title>/', $html, $title);             // extract the page title
    preg_match_all('/<a\s+href=["\']([^"\']+)["\']/i', $html, $links);  // extract all links
    $result = array('title' => $title[1], 'links' => $links[1]);        // build the result
    return $result;
}
$url = 'http://example.com';
$result = spider($url);
print_r($result);
In the above code, we define a function named spider that performs three steps: fetch the page HTML, extract the page title, and extract the page links. Finally, the function returns the result as an associative array. Call this function with a URL to get the title and all links of the web page.
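Regular expressions are fragile against real-world HTML. As an alternative sketch (not part of the original article's code), PHP's bundled DOMDocument class parses the markup properly and extracts the same information:

```php
<?php
// Alternative: parse HTML with DOMDocument instead of regular expressions.
$html = '<html><head><title>Demo</title></head>'
      . '<body><a href="/one">1</a><a href="/two">2</a></body></html>';
$doc = new DOMDocument();
@$doc->loadHTML($html);  // @ suppresses warnings on sloppy real-world HTML
$title = $doc->getElementsByTagName('title')->item(0)->textContent;
$links = [];
foreach ($doc->getElementsByTagName('a') as $a) {
    $links[] = $a->getAttribute('href');  // collect every href attribute
}
print_r(['title' => $title, 'links' => $links]);
```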
To sum up, with a few of PHP's built-in functions we can easily write a basic crawler function to gather information from the Internet. In real-world development, we also need to consider anti-crawler countermeasures, data storage, and other issues to ensure the crawler's stability and reliability.