
PHP programmers know that code written in plain PHP runs synchronously. So how do you write an asynchronous program in PHP? The answer is Swoole. Here we use crawling web pages as an example to show how to write an asynchronous program with Swoole.

The synchronous PHP program

Before writing the asynchronous version, let's not rush: first implement the crawler synchronously in plain PHP.

<?php
/**
 * Class Crawler
 * Path: /Sync/Crawler.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->loadPageUrls();
        $this->visitAll();
    }
    private function loadPageUrls()
    {
        $content = $this->visit($this->url);
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url)
    {
        return @file_get_contents($url);
    }
}
<?php
/**
 * crawler.php
 */
require_once 'Sync/Crawler.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree();
$timeUsed = microtime(true) - $start;
echo "time used: " . $timeUsed;
/* output:
time used: 6.2610177993774
*/
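
To make the URL-extraction step concrete, here is a small standalone sketch of the same regular expression used in loadPageUrls(), run against an invented HTML snippet (the snippet and the URLs in it are purely illustrative):

<?php
// Standalone illustration of the URL-extraction regex from loadPageUrls().
// The HTML snippet below is made up for demonstration only.
$content = '<a href="http://www.swoole.com/docs">docs</a> see also ftp://mirror.example.com/file';
$pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
preg_match_all($pattern, $content, $matched);
// $matched[0] holds the full matches, which is what the crawler pushes into
// $toVisit; note that a full match can include the delimiter that ended the URL.
print_r($matched[0]);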

A first look at implementing an asynchronous crawler with Swoole

First, refer to the official example for asynchronously fetching a page.
Usage example

Swoole\Async::dnsLookup("www.baidu.com", function ($domainName, $ip) {
    $cli = new swoole_http_client($ip, 80);
    $cli->setHeaders([
        'Host' => $domainName,
        "User-Agent" => 'Chrome/49.0.2587.3',
        'Accept' => 'text/html,application/xhtml+xml,application/xml',
        'Accept-Encoding' => 'gzip',
    ]);
    $cli->get('/index.html', function ($cli) {
        echo "Length: " . strlen($cli->body) . "\n";
        echo $cli->body;
    });
});

It looks as if slightly modifying the synchronous file_get_contents code is all it takes to go asynchronous; success seems within easy reach.
So we end up with the following code:

<?php
/**
 * Class Crawler
 * Path: /Async/CrawlerV1.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    private $loaded = false;
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->visit($this->url, true);
        $retryCount = 3;
        do {
            sleep(1);
            $retryCount--;
        } while ($retryCount > 0 && $this->loaded == false);
        $this->visitAll();
    }
    private function loadPage($content)
    {
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url, $root = false)
    {
        $urlInfo = parse_url($url);
        Swoole\Async::dnsLookup($urlInfo['host'], function ($domainName, $ip) use ($urlInfo, $root) {
            $cli = new swoole_http_client($ip, 80);
            $cli->setHeaders([
                'Host' => $domainName,
                "User-Agent" => 'Chrome/49.0.2587.3',
                'Accept' => 'text/html,application/xhtml+xml,application/xml',
                'Accept-Encoding' => 'gzip',
            ]);
            $cli->get($urlInfo['path'], function ($cli) use ($root) {
                if ($root) {
                    $this->loadPage($cli->body);
                    $this->loaded = true;
                }
            });
        });
    }
}
<?php
/**
 * crawler.php
 */
require_once 'Async/CrawlerV1.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree();
$timeUsed = microtime(true) - $start;
echo "time used: " . $timeUsed;
/* output:
time used: 3.011773109436
*/

The run took 3 seconds. Note how the implementation works: after initiating the request for the home page, it polls for the result once per second and gives up after three attempts. Those 3 seconds appear to be exactly the three polls timing out without a result.
Perhaps I was simply too impatient and did not give the requests enough time. Fine, let's raise the number of polls to 10 and see what happens.

time used: 10.034232854843

You can imagine how I felt at this point.

Is it a performance problem in Swoole? Why is there still no result after 10 seconds? Or am I just using it the wrong way? As Marx said, "Practice is the sole criterion for testing truth." Time to debug and find out.

So I set breakpoints at

$this->visitAll();

and

$this->loadPage($cli->body);

and found that visitAll() always runs first, and loadPage() only runs afterwards.

After thinking about it for a while, I roughly understood why. So what is actually going on behind the scenes?

The asynchronous execution model I expected looks like this:

(figure: the expected model, where the callback runs as soon as the response data is ready)

However, the real picture is different. From debugging, the actual model looks roughly like this:

(figure: the actual model, where callbacks only run after the current function returns control to the event loop)

In other words, no matter how many times I retry, the data will never be ready inside that loop: the callbacks can only start running after the current function has finished and control returns to the event loop. The asynchrony here only shortens the time spent preparing the connection.
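
To see why the polling approach cannot work, here is a minimal sketch (it assumes only that the Swoole extension is installed; the timer value and messages are illustrative). A callback registered with Swoole is queued on the event loop, and the event loop gets no chance to run while blocking code such as sleep() occupies the current function:

<?php
// Illustrative sketch: the timer callback is queued on the event loop, so it
// cannot fire while sleep() blocks the process, just as loadPage() could not
// run while the polling loop was sleeping.
swoole_timer_after(100, function () {
    echo "callback fired\n";         // only runs after control returns to the event loop
});
sleep(3);                            // blocks the whole process; the event loop is starved
echo "blocking code finished\n";     // prints first; the callback fires afterwards
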
The question, then, is how to make the program run the logic I want once the data really is ready.
Let's first look at how Swoole's official example of executing asynchronous tasks is written:

$serv = new swoole_server("127.0.0.1", 9501);
// Set the number of worker processes for asynchronous tasks
$serv->set(array('task_worker_num' => 4));
$serv->on('receive', function($serv, $fd, $from_id, $data) {
    // Dispatch an asynchronous task
    $task_id = $serv->task($data);
    echo "Dispatch AsyncTask: id=$task_id\n";
});
// Handle the asynchronous task
$serv->on('task', function ($serv, $task_id, $from_id, $data) {
    echo "New AsyncTask[id=$task_id]".PHP_EOL;
    // Return the result of the task
    $serv->finish("$data -> OK");
});
// Handle the result of the asynchronous task
$serv->on('finish', function ($serv, $task_id, $data) {
    echo "AsyncTask[$task_id] Finish: $data".PHP_EOL;
});
$serv->start();
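
For context, the 'receive' callback above fires when a TCP client sends data to the server. A minimal way to drive it could look like the following sketch (illustrative only; it assumes the server above is already running on 127.0.0.1:9501):

<?php
// Illustrative client: sending any data triggers the server's 'receive' callback,
// which then dispatches the asynchronous task to a task worker.
$client = new swoole_client(SWOOLE_SOCK_TCP);
if (!$client->connect('127.0.0.1', 9501, 1.0)) {
    exit("connect failed\n");
}
$client->send('hello task worker');  // the server side prints "Dispatch AsyncTask: id=..."
$client->close();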

As you can see, the official example hands the follow-up execution logic to the framework as anonymous callback functions. Looked at this way, things become much simpler.

<?php
/**
 * Class Crawler
 * Path: /Async/Crawler.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->visit($this->url, function ($content) {
            $this->loadPage($content);
            $this->visitAll();
        });
    }
    private function loadPage($content)
    {
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url, $callBack = null)
    {
        $urlInfo = parse_url($url);
        Swoole\Async::dnsLookup($urlInfo['host'], function ($domainName, $ip) use($urlInfo, $callBack) {
            if (!$ip) {
                return;
            }
            $cli = new swoole_http_client($ip, 80);
            $cli->setHeaders([
                'Host' => $domainName,
                "User-Agent" => 'Chrome/49.0.2587.3',
                'Accept' => 'text/html,application/xhtml+xml,application/xml',
                'Accept-Encoding' => 'gzip',
            ]);
            $cli->get($urlInfo['path'], function ($cli) use ($callBack) {
                if ($callBack) {
                    call_user_func($callBack, $cli->body);
                }
                $cli->close();
            });
        });
    }
}

This code feels oddly familiar. The callbacks you see everywhere in Node.js development exist for a reason; now it suddenly makes sense that callbacks are there precisely to deal with asynchrony.
I ran the program and it took only 0.0007s, finishing before it had even really started! Can asynchrony really improve efficiency that much? Of course not; something is wrong with our code.
Because everything is asynchronous, the line that calculates the elapsed time runs before the tasks have actually finished. It seems it's time to use a callback again.

/**
 * Async/Crawler.php
 */
    public function visitOneDegree($callBack)
    {
        $this->visit($this->url, function ($content) use($callBack) {
            $this->loadPage($content);
            $this->visitAll();
            call_user_func($callBack);
        });
    }
<?php
/**
 * crawler.php
 */
require_once 'Async/Crawler.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree(function () use($start) {
    $timeUsed = microtime(true) - $start;
    echo "time used: " . $timeUsed;
});
/*output:
time used: 0.068463802337646
*/

Now the result is far more believable.
Let's compare synchronous and asynchronous: the synchronous version takes 6.26s and the asynchronous version 0.068s, a difference of 6.19s, or, put more accurately, roughly 90 times faster!
Of course, while asynchronous code wins handily on efficiency, its logic is more convoluted than the synchronous version and brings a pile of callbacks, which are harder to follow.
The Swoole team has a very pertinent remark on choosing between asynchronous and synchronous styles, which I'd like to share:

We do not recommend using asynchronous callbacks for feature development. The traditional synchronous PHP style is the simplest and best way to implement features and logic. Scattering callbacks everywhere, as in Node.js, only sacrifices maintainability and development efficiency.

Related reading:

How to install swoole extension with php7

Swoole development key points introduction

php asynchronous Multi-threaded swoole usage examples

That's all for this article. If you have any questions, feel free to discuss them in the comments below.
