
Optimizing Junk PHP Code

WBOY (Original)
2016-07-21 15:34:43

The company has several websites hosted on virtual hosts in the United States. The MySQL service on the server was going down almost every day and coming back a while later; I suspected it was exceeding its CPU usage limit and being killed automatically. Yet this server gets very little traffic. So I first contacted the hosting provider's support to ask whether another customer on the shared host was overloading it and taking everyone down with them. After looking into it, support swore the problem was not on their side, and nothing got resolved, so I had to investigate myself.

Fortunately I had access to the information_schema database, and what I found there left me speechless: the data in user_statistics showed that one of our own MySQL users had extremely high busy_time, cpu_time, and related counters. Luckily the host hadn't noticed. So I went straight to the code. The site wasn't written by me, but I knew it was full of problems, from the architecture down to the implementation: far more pages than necessary, logic mixed freely into the HTML, the kind of code you can't bear to read in one sitting (it's at moments like this that you really appreciate MVC). Still, traffic was low, so "it runs" had always been good enough.
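For reference, the check looked roughly like this. This is my own reconstruction rather than the original script: USER_STATISTICS is a nonstandard information_schema table (present on builds such as Percona Server and MariaDB with userstat enabled), and the connection details are placeholders.

<?php
// My own sketch of the diagnostic described above, not the original script.
// BUSY_TIME and CPU_TIME are cumulative per-user resource counters.
mysql_connect('localhost', 'db_user', 'password'); // placeholder credentials
$rs = mysql_query("SELECT USER, BUSY_TIME, CPU_TIME
                   FROM information_schema.USER_STATISTICS
                   ORDER BY CPU_TIME DESC");
while ($row = mysql_fetch_assoc($rs)) {
    printf("%s: busy=%s cpu=%s\n", $row['USER'], $row['BUSY_TIME'], $row['CPU_TIME']);
}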

Since MySQL was bearing the heavy load, let's start there. I made a local mirror of the website, got it running, and turned on query logging by adding the following to the MySQL configuration file:

[mysqld]
# General query log: records every statement the server receives
log="d:/temp/mysql.log"
# Slow query log: records statements slower than long_query_time seconds
log_slow_queries="d:/temp/mysql_slow.log"
long_query_time=1


The log directory must already exist. Restart the MySQL service and recording begins.
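To confirm the settings took effect, you can ask the server directly. A quick check of my own, using the same legacy mysql_* API as the rest of this article (credentials are placeholders):

<?php
// Sanity check (my own addition): confirm the log settings are active.
mysql_connect('localhost', 'root', 'password'); // placeholder credentials
foreach (array("SHOW VARIABLES LIKE 'log%'",
               "SHOW VARIABLES LIKE 'long_query_time'") as $q) {
    $rs = mysql_query($q);
    while ($row = mysql_fetch_assoc($rs)) {
        echo $row['Variable_name'], ' = ', $row['Value'], "\n";
    }
}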

Checking the SQL log, I was stunned: the number of queries was astonishing. Any given page ran dozens of queries, and some ran into the thousands!

For comparison, a typical forum page needs only about 10 database queries, and even fewer with caching. Measured that way, this site was putting dozens of times the necessary load on the database; no wonder it was in trouble.

Nobody has the patience to write hundreds of queries by hand, so they had to come from a loop, and the logged SQL statements bore that out. Once the cause was known, the fix was easy: find the relevant pages and replace the looped queries. For example, one page displays every regional category along with the number of articles under each. Setting aside the database schema itself, the program originally looked roughly like this:

$sql1="SELECT aid,count(*) as cc FROM pz_content WHERE uid=$uid group by aid";
$rs1=$db->query($sql1);
if(is_array($rs1)){
foreach($rs1 as $r1){
Output...
echo id2name($r1->aid);
}
}
............
function id2name($aid)
{
$sql="select ename from pz_area_a where aid_a=".$id;
$result=mysql_query($sql);
$row=mysql_fetch_object( $result);
return $row->ename;
}

Never mind the code's error handling; the logic is clear from the implementation: it first reads the user's articles, grouped and counted by region ID, and then looks up the region name for each region one at a time. So if there are 10,000 regions, this makes 10,000 queries. I wrapped the page in some timing code and measured it: about 6 MB of memory, 16 seconds of execution time, and 1,001 queries in total.
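The timing code was nothing fancy. A minimal sketch of that kind of harness (my own, not the article's original code):

<?php
// Minimal measurement harness (my own sketch, not the original code).
$t0 = microtime(true);

// ... run the page logic being measured here ...

printf("time: %.0f ms, peak memory: %.2f MB\n",
    (microtime(true) - $t0) * 1000,
    memory_get_peak_usage() / 1048576);
// A query count can be kept by incrementing a counter inside the $db->query() wrapper.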

In fact, the whole job can be done with a single SQL statement; no loop is required at all:


$sql1="select pz_area.aid,pz_area.ename,tb1. cc from pz_area right join (SELECT aid,count(*) as cc FROM pz_content WHERE uid=$uid group by aid) as tb1 on pz_area.aid=tb1.aid";
$rs1=$db->query ($sql1);
if(is_array($rs1)){
foreach($rs1 as $r1){
Output...
echo $r1->ename;
}
}


That solves the problem. After rerunning, memory consumption was almost unchanged, but with a single query the CPU execution time was only 647 milliseconds, roughly a 26-fold improvement over the original! Looking further, I noticed the pz_content table holds quite a lot of records and is frequently filtered and grouped by region, yet it had no index on that column. Adding an index on aid brought the execution time down to 432 milliseconds.
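For completeness, adding the index is a one-liner; the index name below is my own choice:

<?php
// Add the missing index on pz_content.aid (the index name is my own choice).
mysql_query("ALTER TABLE pz_content ADD INDEX idx_aid (aid)");

And since the query filters on uid and groups by aid, a composite index on (uid, aid) might serve it even better, which would be worth testing against real data.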

That's enough for this page; I'll stop here and continue next time.
