
PHP crawler practice: extract required data from Baidu search results

Jun 13, 2023, 10:22 AM
php, crawler, data extraction

With the rapid development of the Internet, the era of information explosion has arrived. Search engines have become our main tool for obtaining information, and the amount of data they provide is enormous. However, for researchers or data analysts in specific fields, the information they need may be only a small part of these search results. In such cases, we need a crawler to extract exactly the data we want.

In this article, we will use PHP to write a simple crawler program to extract the data we need from Baidu search results. The core of this program is to use PHP's cURL library to simulate HTTP requests, and then use regular expressions and other methods to parse the HTML page.

Approach

Before we start writing the crawler program, we need to clarify a few questions:

  1. Goal: What data do we want to crawl from the Baidu search results page?
  2. URL: Which URL do we need to request to get that data?
  3. Data format: What format does the data on the Baidu search results page take?

When thinking about what data we need to obtain, let’s take the keyword “PHP crawler” as an example. If we search for this keyword on Baidu, we can see the following information:

  • The total number of search results
  • The title of each search result
  • The description of each search result
  • The URL of each search result

We can then define our goal as extracting the title, description and URL of each result from the Baidu search results.

The first step in obtaining the data is to clarify the URL we need to request. In our example, the URL we need is https://www.baidu.com/s?wd=php crawler. Typing "php crawler" into the Baidu search bar automatically takes us to this URL.
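
To illustrate, here is a minimal sketch of how such a search URL could be built in PHP. The $keyword variable name and the use of urlencode() are illustrative assumptions; Baidu simply expects the query in the wd parameter (the crawler function later in this article builds its URL in the same way).

<?php

// Minimal sketch: build the Baidu search URL for an arbitrary keyword.
// $keyword and the use of urlencode() are illustrative assumptions.
$keyword   = "php 爬虫";
$searchUrl = "https://www.baidu.com/s?wd=" . urlencode($keyword);

echo $searchUrl, PHP_EOL; // https://www.baidu.com/s?wd=php+%E7%88%AC%E8%99%AB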

Next, we need to understand the format of the data we are going to parse. In our case, each search result appears as HTML code similar to the following:

<div class="result c-container ">
    <h3 class="t">
        <a href="http://www.example.com/" target="_blank" class="c-showurl">
            www.example.com
        </a>
        <em>PHP</em> 爬虫是什么? - PHP 入门教程 - 极客学院
    </h3>
    <div class="c-abstract">
        <span class=" newTimeFactor_before_abs">2天前 - </span>
        <em>PHP</em> 爬虫是一种方便快捷的数据采集方式 ... 目前的爬虫主要是通过<a
            href="https://www.baidu.com/s?wd=python%20爬虫&rsp=1&f=8&ie=utf-8&tn=95754739_hao_pg"
            target="_blank" class="text-underline">python 爬虫</a>实现。相比于 <a
            href="https://www.baidu.com/link?url=zdiwLoE_LR5bzae8ifgYsYXBfvatKGD0D6Yjli9c8_nsisbDmnS-r8l7g-5G2NI79x6yO8NnDdnLqhNuqOZtedHjiOZbhsDNwkFx3pW6yBt&wd=&eqid=f774f5d00003a46c000000065f51fc9a"
            target="_blank" class="text-underline">PHP</a>,<a
            href="https://www.baidu.com/link?url=zdiwLoE_LR5bzae8ifgYsYXBfvatKGD0D6Yjli9c8_ns
            isbDmnS-r8l7g-5G2NI79x6yO8NnDdnLqhNuqOZtedHjiOZbhsDNwkFx3pW6yBt&
            wd=&eqid=f774f5d00003a46c000000065f51fc9a" target="_blank"
            class="text-underline">PHP</a> 一般用作...
    </div>
</div>

In the HTML snippet above, you can see that each search result is nested inside a <div class="result c-container "> tag. Each result has a title, which corresponds to the <h3 class="t"> element, with the link address nested in an <a></a> tag. Each result has a description, which corresponds to the <div class="c-abstract"> element. Each result also has a display URL inside an <a> tag with class="c-showurl".

Now that we have clarified the data we want to obtain and the format of the HTML we need to parse, we can start writing our crawler program.

Writing code

We divide our PHP crawler code into three steps:

  1. Get the HTML page of Baidu search results
  2. Parse the HTML page
  3. Return the parsed data as an array

Get the HTML page of Baidu search results

We can use PHP's cURL library to send an HTTP request and obtain the HTML page of the Baidu search results. In this example, we store the URL of the search page in the $url variable, then create a cURL handle and set a number of options: the URL, the request headers, the timeout, the request method (GET), and so on. Finally, we execute the handle to obtain the HTML page.

<?php

$url = "https://www.baidu.com/s?wd=php%20爬虫";

// Create a cURL handle
$ch = curl_init();

// Set cURL options
curl_setopt($ch, CURLOPT_URL, $url);                // target URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);     // return the response instead of printing it
curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate'); // accept compressed responses
curl_setopt($ch, CURLOPT_HEADER, true);             // include the response headers in the output
curl_setopt(
    $ch,
    CURLOPT_HTTPHEADER,
    [
        // Make the request look like it came from a normal browser
        'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
        'Referer: https://www.baidu.com/',
        'Connection: keep-alive',
    ]
);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);              // give up after 30 seconds
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);     // follow redirects
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);    // skip SSL certificate verification
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "GET");     // send a GET request

// Execute the handle to obtain the HTML page
$result = curl_exec($ch);

In this example, we use several of the options provided by the cURL library: setting the request headers so the request looks like it was sent by a browser, setting the request method to GET, setting a timeout, and so on.
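
Network requests can fail, so it is worth checking the return value of curl_exec() before trying to parse it. The following is a minimal sketch of such a check, reusing the $ch handle and the $result variable from the code above; how to react to an error (log, retry, abort) is left to the caller.

<?php

// Minimal sketch: verify the request succeeded before parsing $result.
// Assumes $ch and $result come from the code above.
if ($result === false) {
    // Transport-level failure (timeout, DNS error, connection refused, ...)
    die('cURL error: ' . curl_error($ch));
}

$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
if ($httpCode !== 200) {
    // Baidu may answer with a redirect or an anti-bot verification page
    die('Unexpected HTTP status code: ' . $httpCode);
}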

Parse the HTML page

After obtaining the HTML page of the Baidu search results, we need to parse it to extract the information we are interested in. In this example, we will use PHP regular expressions to parse the HTML page.

The following is the regular expression we use to extract the title, description and link from the HTML page:

<?php

$result = curl_exec($ch);

// Match all search results
preg_match_all(
    '/<div.*?class="result.*?">.*?<h3.*?>.*?<a.*?href="(.*?)".*?>\s*(.*?)\s*<\/a>.*?<\/h3>.*?<div.*?class="c-abstract.*?">(.*?)<\/div>.*?<\/div>/s',
    $result,
    $matches
);

// Extract the title, description and link of each search result
$data = [];
for ($i = 0; $i < count($matches[0]); $i++) {
    $data[] = [
        'title' => strip_tags($matches[2][$i]),       // strip HTML tags from the title
        'description' => strip_tags($matches[3][$i]), // strip HTML tags from the description
        'link' => $matches[1][$i]
    ];
}

// Close the cURL handle
curl_close($ch);

In the code above, we use a PHP regular expression to match all of the search results. The pattern uses the s modifier so that . also matches newlines, because each result spans several lines of HTML. We then loop through the matches and extract the titles, descriptions and links we need. Since the titles and descriptions taken from the HTML may still contain HTML tags, we use the strip_tags function to remove them.
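
Regular expressions are brittle if Baidu changes its markup. As an alternative sketch (not part of the original program), the same extraction could be done with PHP's built-in DOMDocument and DOMXPath classes. The class names used below are taken from the sample HTML earlier in this article and are assumptions about Baidu's current markup; the sketch also assumes $result contains only the HTML body (for example with CURLOPT_HEADER turned off, or with the headers stripped first).

<?php

// Alternative sketch: parse the page with DOMDocument/DOMXPath instead of a regex.
// The class names ("result", "c-abstract") come from the sample HTML above and are
// assumptions about Baidu's markup; $result is assumed to hold only the HTML body.
$dom = new DOMDocument();
libxml_use_internal_errors(true); // silence warnings about imperfect real-world HTML
$dom->loadHTML($result);
libxml_clear_errors();

$xpath = new DOMXPath($dom);
$data  = [];

foreach ($xpath->query('//div[contains(@class, "result")]') as $node) {
    $titleLink = $xpath->query('.//h3//a', $node)->item(0);
    $abstract  = $xpath->query('.//div[contains(@class, "c-abstract")]', $node)->item(0);

    if ($titleLink !== null) {
        $data[] = [
            'title'       => trim($titleLink->textContent),
            'description' => $abstract !== null ? trim($abstract->textContent) : '',
            'link'        => $titleLink->getAttribute('href'),
        ];
    }
}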

Return the results

With the code above we have obtained the data we need; now we only have to return the results in the form of an array. We wrap the entire crawler into a function that returns the extracted data as an array:

<?php

/**
 * Crawl the Baidu search results for a keyword and return an array
 * of entries with the keys 'title', 'description' and 'link'.
 */
function spider_baidu($keyword) {
    $url = "https://www.baidu.com/s?wd=" . urlencode($keyword);

    $ch = curl_init();

    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate');
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt(
        $ch,
        CURLOPT_HTTPHEADER,
        [
            'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
            'Referer: https://www.baidu.com/',
            'Connection: keep-alive',
        ]
    );
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "GET");

    $result = curl_exec($ch);

    preg_match_all(
        '/<div.*?class="result.*?">.*?<h3.*?>.*?<a.*?href="(.*?)".*?>\s*(.*?)\s*<\/a>.*?<\/h3>.*?<div.*?class="c-abstract.*?">(.*?)<\/div>.*?<\/div>/s',
        $result,
        $matches
    );

    $data = [];
    for ($i=0; $i<count($matches[0]); $i++) {
        $data[] = [
            'title' => strip_tags($matches[2][$i]),
            'description' => strip_tags($matches[3][$i]),
            'link' => $matches[1][$i]
        ];
    }

    curl_close($ch);

    return $data;
}

The function receives a keyword as a parameter; calling it returns the titles, descriptions and links from the Baidu search results for that keyword.
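
As a usage sketch, the function can be called like this; the keyword and the plain echo output are only examples of how the returned array might be consumed.

<?php

// Usage sketch: fetch the results for an example keyword and print them.
$results = spider_baidu('php 爬虫');

foreach ($results as $i => $item) {
    echo ($i + 1) . '. ' . $item['title'] . PHP_EOL;
    echo '   ' . $item['description'] . PHP_EOL;
    echo '   ' . $item['link'] . PHP_EOL . PHP_EOL;
}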

Conclusion

In this article, we wrote a simple crawler program in PHP to extract the required data from Baidu search results. The program uses PHP's cURL library to send HTTP requests and uses regular expressions (among other possible methods) to parse the HTML page. Through this example, we can get a better understanding of how crawlers work and how to write one in PHP. In real projects, we can adapt this program to our own needs to obtain the data we want.

