


Introduction to Gearman, a powerful PHP tool for concurrent multi-process processing
In our work we sometimes need to publish data to multiple servers at once, or process multiple tasks at the same time. PHP's curl_multi functions can issue requests concurrently, but network conditions, data volume, and the state of the various servers can make the overall response time very slow, because each request also involves logging, data processing, and other logic, and the caller has to wait for the results to come back. This makes for a poor back-office experience.
Gearman offers another way to meet this concurrency requirement. The Client sends jobs to Gearman's Job Server, and each Worker performs the curl_multi calls, data processing, logging, and other operations. With Supervisor monitoring the gearmand and worker processes, we get a parallel, multi-process, load-balanced setup.
What Gearman can do:
Asynchronous processing: image processing, order processing, batch emails/notifications, etc.
Processing requiring high CPU or memory: large-capacity data processing, MapReduce operations, log aggregation, video encoding
Distributed and parallel processing
Scheduled processing: incremental updates, data replication
Rate-limited FIFO processing
Distributed system monitoring tasks
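To make the asynchronous case concrete, here is a minimal sketch of queuing a background job the client does not wait for. It assumes a gearmand running on 127.0.0.1:4730 and a worker that has registered a function named "send_mail" (a hypothetical name for illustration):

```php
<?php
// Minimal sketch: queue an asynchronous job and return immediately.
// Assumes gearmand is running on 127.0.0.1:4730 and a worker has
// registered a function called "send_mail" (hypothetical name).
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);

// doBackground() queues the job and returns a job handle at once,
// so the web request is not blocked while the email is sent.
$handle = $client->doBackground('send_mail', json_encode([
    'to'      => 'user@example.com',
    'subject' => 'Welcome!',
]));

echo "Queued job: $handle\n";
```

This is the pattern behind the "asynchronous processing" use cases above: the slow work happens in a worker process, not in the request.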
Gearman working principle:
An application using Gearman usually consists of three parts: a Client, a Worker, and a Job Server. The Client creates a job and hands it to the Job Server, which finds a suitable Worker to run it. The Worker executes the job sent by the Client and returns the result to the Client through the Job Server. Gearman provides Client and Worker APIs through which applications talk to the Gearman Job Server; communication between Client and Worker is conducted over TCP connections.
Gearman can distribute the workload of work to different machines.
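A sketch of how a client might spread jobs across several machines; the host addresses are made up for illustration, and 4730 is gearmand's default port:

```php
<?php
$client = new GearmanClient();
// Register several job servers; the client distributes jobs
// among them, so work is spread across machines.
$client->addServer('10.0.0.11', 4730);
$client->addServer('10.0.0.12', 4730);
// addServers() accepts a comma-separated list as an alternative:
// $client->addServers('10.0.0.11:4730,10.0.0.12:4730');
```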
Installation:
rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/Redhat/6/x86_64/epel-release-6-5.noarch.rpm
yum install -y gearmand
Start:
gearmand -d
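A few commands that may help verify the daemon is up. gearadmin ships with gearmand, but the flags are worth double-checking against your installed version:

```shell
gearmand -d                # start as a daemon (default port 4730)
ps aux | grep [g]earmand   # confirm the process is running
gearadmin --status         # registered functions: name, queued, running, workers
gearadmin --workers        # list connected workers
```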
Install the PHP Gearman extension
I used pecl to install it. You can also download the source package and compile it yourself, but remember to install libgearman and re2c first, otherwise compiling the extension will fail.
pecl install gearman # if this fails with a version error, try pecl install gearman-1.0.3; the default seems to be 1.1.2
Compiling and installing from source is also simple:
wget -c http://pecl.php.net/get/gearman-1.1.1.tgz
tar zxvf gearman-1.1.1.tgz
cd gearman-1.1.1
phpize
./configure
make && make install
echo "extension=gearman.so" >> /etc/php.ini
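After installation you can confirm the extension is loaded (assuming the php CLI binary uses the same php.ini you edited):

```shell
php -m | grep gearman   # should print "gearman"
php --ri gearman        # shows the extension version and the libgearman version
```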
PHP interface
The Gearman extension provides a complete set of classes, including GearmanClient, GearmanJob, GearmanTask, and GearmanWorker. For details, see the official PHP manual.
Here is one of the officially provided examples, a good illustration of distributing tasks for concurrent processing. The client script:
<?php
$client = new GearmanClient();
$client->addServer();

// initialize the results of our 3 "query results" here
$userInfo = $friends = $posts = null;

// This sets up what gearman will callback to as tasks are returned to us.
// The $context helps us know which function is being returned so we can
// handle it correctly.
$client->setCompleteCallback(function(GearmanTask $task, $context) use (&$userInfo, &$friends, &$posts) {
    switch ($context) {
        case 'lookup_user':
            $userInfo = $task->data();
            break;
        case 'baconate':
            $friends = $task->data();
            break;
        case 'get_latest_posts_by':
            $posts = $task->data();
            break;
    }
});

// Here we queue up multiple tasks to be executed in *as much* parallelism as gearmand can give us
$client->addTask('lookup_user', 'joe@joe.com', 'lookup_user');
$client->addTask('baconate', 'joe@joe.com', 'baconate');
$client->addTask('get_latest_posts_by', 'joe@joe.com', 'get_latest_posts_by');

echo "Fetching...\n";
$start = microtime(true);
$client->runTasks();
$totaltime = number_format(microtime(true) - $start, 2);

echo "Got user info in: $totaltime seconds:\n";
var_dump($userInfo, $friends, $posts);
gearman_work.php
<?php
$worker = new GearmanWorker();
$worker->addServer();

$worker->addFunction('lookup_user', function(GearmanJob $job) {
    // normally you'd do some very safe type checking and query binding to a database here.
    // ...and we're gonna fake that.
    sleep(3);
    return 'The user requested (' . $job->workload() . ') is 7 feet tall and awesome';
});

$worker->addFunction('baconate', function(GearmanJob $job) {
    sleep(3);
    return 'The user (' . $job->workload() . ') is 1 degree away from Kevin Bacon';
});

$worker->addFunction('get_latest_posts_by', function(GearmanJob $job) {
    sleep(3);
    return 'The user (' . $job->workload() . ') has no posts, sorry!';
});

while ($worker->work());
I executed gearman_work.php in 3 terminals:
ryan@ryan-lamp:~$ ps aux | grep gearman* | grep -v grep
gearman   1504  0.0  0.1  60536  1264 ?      Ssl  11:06   0:00 /usr/sbin/gearmand --pid-file=/var/run/gearman/gearmand.pid --user=gearman --daemon --log-file=/var/log/gearman-job-server/gearman.log --listen=127.0.0.1
ryan      2992  0.0  0.8  43340  9036 pts/0  S+   14:05   0:00 php /var/www/gearmand_work.php
ryan      3713  0.0  0.8  43340  9036 pts/1  S+   14:05   0:00 php /var/www/gearmand_work.php
ryan      3715  0.0  0.8  43340  9036 pts/2  S+   14:05   0:00 php /var/www/gearmand_work.php
Now let's run the client script and check the output:
Fetching...
Got user info in: 3.03 seconds:
string(59) "The user requested (joe@joe.com) is 7 feet tall and awesome"
string(56) "The user (joe@joe.com) is 1 degree away from Kevin Bacon"
string(43) "The user (joe@joe.com) has no posts, sorry!"
The total of 3.03 seconds, rather than roughly 9 seconds (each of the three jobs sleeps for 3 seconds), shows that the tasks submitted by the client were distributed across the workers and executed in parallel.
In a real production environment, we can use Supervisor to monitor the gearmand and worker processes and restart them if they exit unexpectedly.
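As a sketch, a Supervisor configuration for keeping several worker processes alive might look like this; the file path, program name, and log path are assumptions to adapt:

```ini
; /etc/supervisord.d/gearman-worker.conf (path is an assumption)
[program:gearman_worker]
command=php /var/www/gearman_work.php
process_name=%(program_name)s_%(process_num)02d
numprocs=3                  ; run three workers in parallel, like the three terminals above
autostart=true
autorestart=true            ; restart a worker if it exits unexpectedly
stdout_logfile=/var/log/gearman_worker.log
redirect_stderr=true
```

A similar `[program:gearmand]` section can watch the job server itself.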


