
How to improve efficiency when PHP performs large-scale tasks?

WBOY (Original) · 2016-08-18 09:15:36

I have a PHP scheduled task that runs once a day. The logic is:
1. Connect to the database and SELECT the relevant data from the database into an array.
2. Loop over the result set; each iteration performs 3 MySQL operations (one SELECT, one INSERT, one UPDATE).
3. Close the database connection after the loop.

The number of loop iterations equals the mysql_num_rows from step 1, typically several thousand to tens of thousands.
So during the loop, thousands × 3 database operations are fired off in a short period of time, which is very inefficient. And because the sheer number of iterations makes the task take so long to complete, nginx returns a 504 error.
On top of that, the frequent database operations and the long-lived connection tie up too many resources, dragging down the whole environment.

How can this be optimized?
Any advice would be appreciated. Thanks in advance.

Replies:


For the situation you describe, don't try to do this inside a web request. Use crontab to schedule the PHP script to run in the background. When querying the database, process in batches: for example, 100,000 rows in total, 1,000 rows at a time. If the rows really must be processed one by one and each one is slow, process each row as you fetch it (fetch_row in a loop) instead of loading everything into an array first and then looping. Remember to set set_time_limit and the database connection timeout appropriately for whichever approach you take.
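A minimal sketch of the batching idea, assuming PDO and a hypothetical tasks table; all names and credentials are illustrative:

```php
<?php
// Run from crontab, e.g.: 0 3 * * * php /path/to/batch.php
set_time_limit(0); // explicit, even though CLI has no limit by default

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$batchSize = 1000;
$offset = 0;

while (true) {
    $stmt = $pdo->prepare('SELECT id, payload FROM tasks ORDER BY id LIMIT :lim OFFSET :off');
    $stmt->bindValue(':lim', $batchSize, PDO::PARAM_INT);
    $stmt->bindValue(':off', $offset, PDO::PARAM_INT);
    $stmt->execute();

    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    if (!$rows) {
        break; // no more data
    }
    foreach ($rows as $row) {
        // ... the per-row SELECT/INSERT/UPDATE work goes here ...
    }
    $offset += $batchSize;
}
```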

A few thoughts on long-running tasks over a somewhat large amount of data:
1. The web environment is not suited to long-running tasks: the nginx + php-fpm architecture is not built for them, and the various timeouts along the way can torture you to death. Apache + PHP is at least a little better, since the timeout can be controlled with a simple set_time_limit(0).
2. Dispatch tasks through the web: most PHP frameworks have poor command-line support, or were not designed with the command line in mind, so a web-based task dispatch mechanism is easier to implement and far less intrusive to an existing framework. For a stable project, keeping a single unified entry point is extremely important. Running tasks from the command line raises a number of issues, the most prominent being file permissions: web projects usually run as a user such as apache, so generated files are also owned by apache, and apache normally isn't allowed to log in. Running commands as the apache user is possible, but more complicated.
3. Divide and conquer: one approach to long-running tasks is to split the big task into small ones, turning one long-running task into many short-lived small tasks. This shortens how long resources are tied up and avoids the various problems caused by long execution, such as database connection timeouts and PHP memory leaks. A sketch follows this list.
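A minimal divide-and-conquer sketch, assuming a dispatcher plus a hypothetical process_chunk.php worker script; all names are illustrative:

```php
<?php
// dispatcher.php -- split the ID range into chunks and run one
// short-lived PHP process per chunk: fresh connection, fresh memory.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$maxId = (int) $pdo->query('SELECT MAX(id) FROM tasks')->fetchColumn();

$chunk = 1000;
for ($start = 0; $start <= $maxId; $start += $chunk) {
    $end = $start + $chunk - 1;
    passthru(sprintf('php process_chunk.php %d %d', $start, $end));
}
```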

Attached is an example I wrote; feedback welcome:
https://github.com/zkc226/cur...

When a large amount of data needs processing, hand it to a task system. A request only produces a message: the producer passes the job to the consumers and returns immediately, so nothing waits long enough to time out, and the consumers process the jobs with multiple workers. Gearman is recommended: it is very convenient to use and has a PHP interface. Others such as Workerman or Swoole can do the same.
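A minimal producer sketch using the pecl gearman extension, assuming a Gearman server on localhost and a hypothetical job name process_row:

```php
<?php
// producer.php -- queue one background job per row, then return at once
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);

$rows = []; // ... the rows selected from the database ...
foreach ($rows as $row) {
    // doBackground() returns immediately; a worker picks the job up later
    $client->doBackground('process_row', json_encode($row));
}
```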

Concentrating all of these operations on one server at the same point in time is bound to be slow and resource-hungry.
Either process in batches as @黄红 suggested, or add servers and distribute the tasks across them, i.e. distributed processing. Bear in mind that this makes the task more complex, because you now have to guarantee data consistency.

1. Export the data to a file, then read the file and loop over it (e.g. via mysqldump).
2. Consider assembling the statements first and executing them in batches, instead of executing a query on every loop iteration; see the sketch after this list.
3. Consider whether a stored procedure could do the work.
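A minimal sketch of point 2, building one multi-row INSERT instead of thousands of single-row ones; the table and column names are illustrative:

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$rows = []; // ... the data accumulated during the loop ...

$values = [];
$params = [];
foreach ($rows as $row) {
    $values[] = '(?, ?)';
    $params[] = $row['user_id'];
    $params[] = $row['score'];
}

// One round trip to MySQL instead of count($rows) round trips
if ($values) {
    $sql = 'INSERT INTO results (user_id, score) VALUES ' . implode(',', $values);
    $pdo->prepare($sql)->execute($params);
}
```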

And because of the number of iterations, multiple tasks take a long time to complete, which causes nginx to return a 504 error.

Does this have to be computed in real time? For compute-heavy tasks, consider running a background job that writes its results to a cache, and have real-time requests only read from that cache.
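A minimal sketch of that pattern, assuming the phpredis extension and a hypothetical computeDailyReport() doing the heavy work:

```php
<?php
// cron side: compute once in the background, write the result to cache
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$result = computeDailyReport(); // hypothetical heavy computation
$redis->setex('report:daily', 86400, json_encode($result)); // 24h TTL

// web side: requests only read the cache, returning instantly
$cached = $redis->get('report:daily');
$report = $cached !== false ? json_decode($cached, true) : null;
```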

This question is similar to one I answered before about improving efficiency through parallel execution.

The essence is to shard the reads of this large data set and process the shards in parallel, partitioned by ID modulo. Say your server and database can handle 20 concurrent workers.

The simplest way to parallelize is to launch 20 script processes:
0.php -> select * from test where id%20=0;
1.php -> select * from test where id%20=1;
2.php -> select * from test where id%20=2;
....

This is the pull model.
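A minimal sketch of one such shard worker, assuming the shard number is passed as a command-line argument; the test table comes from the example above:

```php
<?php
// worker.php -- launch as: php worker.php 0 ... php worker.php 19
$shard   = (int) ($argv[1] ?? 0);
$workers = 20;

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('SELECT * FROM test WHERE id % :n = :shard');
$stmt->bindValue(':n', $workers, PDO::PARAM_INT);
$stmt->bindValue(':shard', $shard, PDO::PARAM_INT);
$stmt->execute();

while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // ... process one row ...
}
```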

The other way is the push model: push the jobs into a queue, and the queue wakes worker processes to execute them. This is more standardized and easier to manage; gearman, mentioned above, is an example. When I worked on an SMS platform, our daily scheduled tasks used exactly this.

The rough logic: a scheduled-task script sends all of the queried data to the gearman scheduler through the gearman client, and you start 20 workers (on the same server, or on different servers in the LAN). The scheduler then distributes the jobs across those 20 gearman worker scripts. Every worker script runs the same code, and each job handles one piece of data. A worker sketch follows.
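A minimal worker sketch to pair with the producer above, again assuming the pecl gearman extension and the hypothetical process_row job name:

```php
<?php
// gearman_worker.php -- start 20 of these, locally or across the LAN
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);

$worker->addFunction('process_row', function (GearmanJob $job) {
    $row = json_decode($job->workload(), true);
    // ... one SELECT/INSERT/UPDATE cycle for this single row ...
});

while ($worker->work()); // block, processing jobs as they arrive
```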

Process it with a PHP script in CLI mode; don't do it over the web, which times out too easily.
