Practical sharing of using Swoole to asynchronously crawl web pages

PHP programmers all know that programs written in PHP are synchronous. So how do you write an asynchronous program in PHP? The answer is Swoole. Here we take crawling web content as an example to show how to write an asynchronous program with Swoole.

A synchronous PHP program

Before writing the asynchronous version, don't rush: let's first implement the crawler synchronously in plain PHP.

<?php
/**
 * Class Crawler
 * Path: /Sync/Crawler.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->loadPageUrls();
        $this->visitAll();
    }
    private function loadPageUrls()
    {
        $content = $this->visit($this->url);
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url)
    {
        return @file_get_contents($url);
    }
}
<?php
/**
 * crawler.php
 */
require_once 'Sync/Crawler.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree();
$timeUsed = microtime(true) - $start;
echo "time used: " . $timeUsed;
/* output:
time used: 6.2610177993774
*/
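As an aside, the URL-extraction regex used in loadPageUrls() can be exercised on its own. Here is a minimal sketch in plain PHP; the sample page content and URLs below are made up for illustration:

```php
<?php
// Extract http/ftp URLs from a page body using the same pattern
// as loadPageUrls() above.
$pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
$content = '<a href="http://www.swoole.com/download/">get</a> '
         . 'see also ftp://mirror.example.com/pub/ for mirrors';
preg_match_all($pattern, $content, $matched);
// $matched[0] holds the full matches including the trailing delimiter;
// $matched[1] holds just the URL itself.
print_r($matched[1]);
```

Note that the pattern stops a URL at the first delimiter character (whitespace, quote, bracket, and so on), which is why $matched[1] rather than $matched[0] is the clean URL.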

A preliminary study on Swoole's implementation of asynchronous crawlers

First refer to the official asynchronous crawling page.
Usage example

Swoole\Async::dnsLookup("www.baidu.com", function ($domainName, $ip) {
    $cli = new swoole_http_client($ip, 80);
    $cli->setHeaders([
        'Host' => $domainName,
        'User-Agent' => 'Chrome/49.0.2587.3',
        'Accept' => 'text/html,application/xhtml+xml,application/xml',
        'Accept-Encoding' => 'gzip',
    ]);
    $cli->get('/index.html', function ($cli) {
        echo "Length: " . strlen($cli->body) . "\n";
        echo $cli->body;
    });
});

It seems that slightly modifying the synchronous file_get_contents code will give us an asynchronous implementation; success looks easy.
And so we arrive at the following code:

<?php
/**
 * Class Crawler
 * Path: /Async/CrawlerV1.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    private $loaded = false;
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->visit($this->url, true);
        $retryCount = 3;
        do {
            sleep(1);
            $retryCount--;
        } while ($retryCount > 0 && $this->loaded == false);
        $this->visitAll();
    }
    private function loadPage($content)
    {
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url, $root = false)
    {
        $urlInfo = parse_url($url);
        Swoole\Async::dnsLookup($urlInfo['host'], function ($domainName, $ip) use ($urlInfo, $root) {
            $cli = new swoole_http_client($ip, 80);
            $cli->setHeaders([
                'Host' => $domainName,
                'User-Agent' => 'Chrome/49.0.2587.3',
                'Accept' => 'text/html,application/xhtml+xml,application/xml',
                'Accept-Encoding' => 'gzip',
            ]);
            $cli->get($urlInfo['path'], function ($cli) use ($root) {
                if ($root) {
                    $this->loadPage($cli->body);
                    $this->loaded = true;
                }
            });
        });
    }
}
<?php
/**
 * crawler.php
 */
require_once 'Async/CrawlerV1.php';
$start = microtime(true);
$url = 'http://www.swoole.com/';
$ins = new Crawler($url);
$ins->visitOneDegree();
$timeUsed = microtime(true) - $start;
echo "time used: " . $timeUsed;
/* output:
time used: 3.011773109436
*/

The result: 3 seconds. Note the implementation: after initiating the request for the home page, I poll for the result once per second and give up after three polls. The 3 seconds here appear to come from exiting after three polls with no result.
It seemed I was simply too impatient and had not given the tasks enough time. Fine, let's raise the poll count to 10 and look at the result.

time used: 10.034232854843

You can imagine how I felt at that moment.

Is it a performance problem in Swoole? Why is there still no result after 10 seconds? Or is my approach wrong? As old Marx said, "Practice is the sole criterion for testing truth." It seemed I had to debug it to find out.

So I added breakpoints at $this->visitAll(); and $this->loadPage($cli->body);. I found that visitAll() always executes first, and only afterwards does loadPage() run.

After thinking it over for a while, I roughly understood the cause. So what is actually happening behind the scenes?

The asynchronous execution model I expected looks like this:

[Figure: the expected asynchronous model]

However, the real behavior is not like this. Through debugging, I worked out that the actual model looks like this:

[Figure: the actual execution model]

In other words, no matter how much I increase the retry count, the data will never be ready: the registered callbacks only start executing after the current function returns. The asynchrony here merely saves the time spent establishing connections.
The question, then, is how to make the program run my logic once the data actually is ready.
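This behavior can be illustrated without Swoole at all, using a tiny deferred-callback queue. This is a toy model of an event loop, not Swoole's actual implementation, but it shows why sleep-polling inside visitOneDegree() could never work:

```php
<?php
// Toy event loop: callbacks are queued and only run after the
// current function has returned. Sleeping inside that function
// can never make the "async" result ready.
$queue = [];

function defer(callable $cb)
{
    global $queue;
    $queue[] = $cb; // queued, NOT executed yet
}

function crawl(array &$log)
{
    defer(function () use (&$log) { $log[] = 'loadPage'; });
    // Polling here would be useless: the deferred callback cannot
    // run until crawl() returns control to the loop.
    $log[] = 'visitAll';
}

$log = [];
crawl($log);
// Drain the queue (one "event loop" turn) only after crawl() returned.
foreach ($queue as $cb) {
    $cb();
}
echo implode(',', $log); // prints "visitAll,loadPage"
```

This is exactly the ordering seen in the debugger: visitAll() runs first, loadPage() second, regardless of how long we wait inside the function.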
Let's first look at how Swoole's official example executes asynchronous tasks:

$serv = new swoole_server("127.0.0.1", 9501);
// Set the number of task worker processes
$serv->set(array('task_worker_num' => 4));
$serv->on('receive', function($serv, $fd, $from_id, $data) {
    // Dispatch an asynchronous task
    $task_id = $serv->task($data);
    echo "Dispatch AsyncTask: id=$task_id\n";
});
// Handle the asynchronous task
$serv->on('task', function ($serv, $task_id, $from_id, $data) {
    echo "New AsyncTask[id=$task_id]".PHP_EOL;
    // Return the task's result
    $serv->finish("$data -> OK");
});
// Handle the task result
$serv->on('finish', function ($serv, $task_id, $data) {
    echo "AsyncTask[$task_id] Finish: $data".PHP_EOL;
});
$serv->start();

As you can see, the official example passes the subsequent execution logic in as an anonymous function. Seen this way, things become much simpler.
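The same pattern works in plain PHP: hand the "what happens next" logic to the worker as a closure, which captures its environment with use and is invoked once the work completes. In this sketch, fetchThen() and its stub body are hypothetical stand-ins for a real asynchronous fetch:

```php
<?php
// Pass the continuation as an anonymous function, the same pattern
// the official Swoole example uses with its event callbacks.
function fetchThen(string $url, callable $onDone)
{
    // Stand-in for a real fetch; pretend $body arrived asynchronously.
    $body = "<html>stub body for $url</html>";
    $onDone($body); // invoke the caller-supplied continuation
}

$lengths = [];
fetchThen('http://www.swoole.com/', function ($body) use (&$lengths) {
    // The closure captures $lengths by reference via `use (&...)`.
    $lengths[] = strlen($body);
});
echo count($lengths); // prints 1
```

The caller never inspects a shared flag or sleeps; it simply states what should happen when the result exists.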

<?php
/**
 * Class Crawler
 * Path: /Async/Crawler.php
 */
class Crawler
{
    private $url;
    private $toVisit = [];
    public function __construct($url)
    {
        $this->url = $url;
    }
    public function visitOneDegree()
    {
        $this->visit($this->url, function ($content) {
            $this->loadPage($content);
            $this->visitAll();
        });
    }
    private function loadPage($content)
    {
        $pattern = '#((http|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i';
        preg_match_all($pattern, $content, $matched);
        foreach ($matched[0] as $url) {
            if (in_array($url, $this->toVisit)) {
                continue;
            }
            $this->toVisit[] = $url;
        }
    }
    private function visitAll()
    {
        foreach ($this->toVisit as $url) {
            $this->visit($url);
        }
    }
    private function visit($url, $callBack = null)
    {
        $urlInfo = parse_url($url);
        Swoole\Async::dnsLookup($urlInfo['host'], function ($domainName, $ip) use($urlInfo, $callBack) {
            if (!$ip) {
                return;
            }
            $cli = new swoole_http_client($ip, 80);
            $cli->setHeaders([
                'Host' => $domainName,
                "User-Agent" => 'Chrome/49.0.2587.3',
                'Accept' => 'text/html,application/xhtml+xml,application/xml',
                'Accept-Encoding' => 'gzip',
            ]);
            $cli->get($urlInfo['path'], function ($cli) use ($callBack) {
                if ($callBack) {
                    call_user_func($callBack, $cli->body);
                }
                $cli->close();
            });
        });
    }
}

Reading this code gave me a sense of déjà vu: the callbacks seen everywhere in Node.js development exist for exactly this reason. Now I finally understand that callbacks exist to solve asynchronous problems.
I ran the program, and it took only 0.0007s; it was over before it even started! Can asynchrony really improve efficiency that much? Of course not: there is a bug in our code.
Because the code is asynchronous, the end-time calculation runs before the tasks have actually finished. It seems it's time to reach for a callback again.

/**
Async/Crawler.php
**/
    public function visitOneDegree($callBack)
    {
        $this->visit($this->url, function ($content) use($callBack) {
            $this->loadPage($content);
            $this->visitAll();
            call_user_func($callBack);
        });
    }
<?php
/**
 * crawler.php
 */
require_once &#39;Async/Crawler.php&#39;;
$start = microtime(true);
$url = &#39;http://www.swoole.com/&#39;;
$ins = new Crawler($url);
$ins->visitOneDegree(function () use($start) {
    $timeUsed = microtime(true) - $start;
    echo "time used: " . $timeUsed;
});
/*output:
time used: 0.068463802337646
*/

Now the results look much more credible.
Let's compare synchronous and asynchronous: synchronous took 6.26s and asynchronous took 0.068s, a difference of 6.192s; or, put more accurately, the asynchronous version is nearly 100 times faster!
Of course, asynchronous code wins on efficiency, but it is logically more convoluted than synchronous code and brings a pile of callbacks, which are harder to follow.
The Swoole documentation has a very pertinent note on choosing between asynchronous and synchronous styles, which I'd like to share:

We do not recommend using asynchronous callbacks for feature development. The traditional synchronous PHP style is the simplest and best way to implement features and logic. Using callbacks everywhere, as in node.js, only sacrifices maintainability and development efficiency.

Related reading:

How to install swoole extension with php7

Swoole development key points introduction

php asynchronous Multi-threaded swoole usage examples

That's all for this article. If you have any questions, feel free to discuss them in the comments below~

The above is the detailed content of Practical sharing of using Swoole to asynchronously crawl web pages. For more information, please follow other related articles on the PHP Chinese website!
