
PHP Learning: Mining the Details to Improve Website Performance

WBOY (Original) · 2016-07-20 10:57:32

I believe the Internet has become an indispensable part of people's lives. Ajax, Flex, and other rich-client technologies let people enjoy, right in the browser, many functions that used to be possible only in C/S applications; Google, for example, has moved all of its basic office applications onto the web. Convenient as this is, it inevitably makes pages slower and slower. I work in front-end development, and according to Yahoo's research, the back end accounts for only about 5% of performance, while the front end accounts for as much as 95%, of which 88% can be optimized.

The above is a life-cycle diagram of a Web 2.0 page. Engineers vividly describe its four stages as "pregnancy, birth, graduation, and marriage." If we keep this process in mind — a click on a link is more than a simple request and response — we can dig out many details that improve performance. Today I listened to a lecture by Xiaoma of Taobao on the Yahoo development team's research into web performance. I felt I gained a lot and wanted to share it on my blog.

I believe many people have heard of Yahoo's 14 rules for optimizing website performance. More information can be found at developer.yahoo.com.

Firefox has a plug-in called YSlow, integrated into Firebug, which makes it easy to check how your website performs against these rules.

This is the result of using YSlow to evaluate my site, Xifengfang. Unfortunately, it scores only 51. Hehe. The major Chinese sites don't score much better: I just ran a test, and both Sina and NetEase came out at 31. Yahoo (US) itself, by contrast, scores a full 97! That shows how much effort Yahoo has put into this. Judging from the 14 rules they summarized and the 20 added since, there are many details we really never think about at all, and some of the practices even feel a little extreme.

Rule 1. Make Fewer HTTP Requests

HTTP requests are expensive, so reducing their number is a natural way to speed up a page. Common techniques include merging CSS files and merging JS files (one of each per page), image maps, and CSS sprites. Of course, CSS and JS are often split into multiple files for reasons of structure and sharing. Alibaba's Chinese site at the time developed them separately and merged the JS and CSS in the background at release time: the browser still made a single request, while developers kept multiple files that were easy to manage and reuse. Yahoo even recommends writing the home page's CSS and JS directly into the page rather than referencing external files, because the home page gets so many visits that inlining saves another two requests. Many domestic portals in fact do the same.
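To make the "develop separately, merge in the background" idea concrete, here is a minimal PHP sketch — the file names are made up for illustration — that concatenates several CSS files and serves them as one response, so the browser pays for only a single request:

<?php
// Serve several stylesheets as one response: developers keep
// separate files for structure and reuse, while the browser
// makes a single HTTP request. File names are placeholders.
$files = array('reset.css', 'layout.css', 'theme.css');

header('Content-Type: text/css');
foreach ($files as $file) {
    if (is_readable($file)) {
        readfile($file);  // stream this file into the combined output
        echo "\n";
    }
}
?>

Note that Alibaba did the merging once at release time rather than on every request, which is cheaper still; the sketch only shows the single-request effect.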

CSS sprites merge the background images used on a page into a single image, and then use the CSS background-position property to display the right part of it as each element's background. Taobao and Alibaba's Chinese site both do this today; if you are interested, take a look at their background images.

http://www.csssprites.com/ is a tool site that automatically merges the images you upload and gives you the corresponding background-position coordinates, outputting the result in PNG and GIF formats.

Rule 2. Use a Content Delivery Network (CDN)

To be honest, I don't know much about CDNs. In short, a CDN adds a new layer of network architecture on top of the existing Internet and publishes a site's content to the cache servers closest to its users. DNS load balancing determines where a user is coming from and routes the request to the nearest cache server: users in Hangzhou fetch content from servers near Hangzhou, and users in Beijing from servers near Beijing. This effectively reduces the time data spends traveling over the network and increases speed. For more detail, see the entry on CDN in Baidu Encyclopedia. Yahoo distributes its static content to a CDN and reduced user response time by 20% or more.

Rule 3. Add an Expires Header

More and more images, scripts, CSS, and Flash are embedded in today's pages, and visiting them inevitably triggers many HTTP requests. We can cache these files by setting an Expires header: Expires simply tells the browser, via the response header, how long a given type of file may be cached. Most images and Flash don't need frequent modification after release, so once cached, the browser reads them straight from its cache instead of downloading them from the server again, and revisiting the page becomes much faster. The header information returned by a typical HTTP/1.1 response:

HTTP/1.1 200 OK
Date: Fri, 30 Oct 1998 13:19:41 GMT
Server: Apache/1.3.3 (Unix)
Cache-Control: max-age=3600, must-revalidate
Expires: Fri, 30 Oct 1998 14:19:41 GMT
Last-Modified: Mon, 29 Jun 1998 02:28:12 GMT
ETag: "3e86-410-3596fbbc"
Content-Length: 1040
Content-Type: text/html

Cache-Control and Expires can be set from server-side scripts. For example, to set an expiration date 30 days out in PHP:

<?php
// Tell caches to check back with the server once the copy expires
header("Cache-Control: must-revalidate");

// 30 days, in seconds
$offset = 60 * 60 * 24 * 30;
$ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT";
header($ExpStr);
?>

The same can also be done in the server's own configuration. I'm not very clear on those details, haha. Friends who want to know more can refer to http://www.web-caching.com/

As far as I know, Alibaba's Chinese site currently sets Expires to 30 days. There have been problems along the way, though: the expiration time of scripts in particular needs careful thought, or it may take clients a very long time to "perceive" changes after a script's function is updated. I ran into exactly this problem while working on the [suggest project]. So think carefully about what should be cached and what should not.
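One common remedy — a sketch of a general technique, not Alibaba's actual mechanism — is to bake a release version into the asset URL, so a far-future Expires header can never strand clients on a stale script. The version constant and script path below are invented for illustration:

<?php
// Version the asset URL: bumping ASSET_VERSION on each release
// changes the URL, so browsers treat the file as new and
// re-download it despite the long Expires lifetime.
define('ASSET_VERSION', '20160720');

function asset_url($path) {
    return $path . '?v=' . ASSET_VERSION;
}
?>
<script src="<?php echo asset_url('/js/suggest.js'); ?>"></script>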

Rule 4. Gzip Components

The idea of gzip is to compress a file on the server before transmitting it, which can dramatically reduce the transfer size; after the transfer completes, the browser decompresses the content and proceeds as usual. All current browsers support gzip well, and not only browsers: the major "crawlers" recognize it too, so SEOers can rest easy. Gzip's compression ratios are large — often 75% or more — meaning a 100K page on the server can be compressed to about 25K before being sent to the client. For the specifics of the algorithm, see the article "Gzip Compression Algorithm" on CSDN. Yahoo particularly emphasizes that all text content should be gzipped: HTML (PHP output), JS, CSS, XML, TXT... Our site does well here; it scores an A. In the past our home page did not, because it carried many ad-serving scripts whose owners' JS was not gzipped, and that dragged our site down too.
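In PHP — assuming the zlib extension is available — the lowest-effort way to try this is the ob_gzhandler output-buffer callback, which compresses the response only when the browser's Accept-Encoding header says it can cope. A minimal sketch:

<?php
// Buffer the page through ob_gzhandler: output is gzipped when
// the client advertises gzip support, and sent uncompressed
// otherwise.
if (!ob_start('ob_gzhandler')) {
    ob_start();  // handler unavailable; fall back to plain buffering
}
?>
<html>
<!-- page content as usual; it is compressed when the buffer flushes -->
</html>

Setting zlib.output_compression in php.ini, or enabling mod_deflate/mod_gzip in Apache, achieves the same thing without touching the code.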

The three rules above are mostly server-side matters, and I understand them only superficially. Please correct me if I am wrong.

Rule 5. Put Stylesheets at the Top

Why put stylesheets at the top of the page? Because browsers such as IE and Firefox will not render anything until all of the CSS has arrived. The reason is as simple as Brother Ma put it: CSS stands for Cascading Style Sheets, and "cascading" means that later rules can override earlier ones, and higher-priority rules can override lower-priority ones (this hierarchy was briefly touched on at the bottom of my [css !important] article). All we need here is the fact that CSS can be overridden: since a later rule may overwrite an earlier one, it is entirely reasonable for the browser to wait until the CSS is fully loaded before rendering. If the stylesheet sits at the bottom of the page instead, many browsers, IE among them, block progressive display to avoid redrawing page elements, and the user sees only a blank page. Firefox does not block, but that means some page elements may need to be redrawn once the stylesheet arrives, which shows up as flicker. So we should load CSS as early as possible.

Following this line of thought, a more careful look turns up further room for optimization — for example, the two CSS files included on this site.

Rule 6. Put Scripts at the Bottom

Placing scripts at the bottom of the page serves two purposes:

1. It prevents script execution from blocking the download of the page. While a page is loading, when the browser reaches a JS statement it interprets and executes it in full before reading any further content. If you don't believe it, write an infinite loop in JS and see whether anything below it on the page ever appears. (setTimeout and setInterval behave somewhat like multithreading: rendering of the following content continues until the corresponding timer fires.) The browser's logic here is that JS may at any moment call location.href or otherwise completely interrupt the page's loading, so of course it must wait for the script to finish before continuing. Placing scripts at the end of the page therefore effectively shortens the time before the page's visible elements appear.

2. Scripts also block parallel downloads. The HTTP/1.1 specification suggests that a browser make no more than two parallel downloads per hostname (IE enforces exactly 2; other browsers such as Firefox default to 2, though the new IE8 can reach 6). So if you distribute image files across several hostnames, you can get more than two downloads running in parallel. While a script file is downloading, however, the browser initiates no other parallel downloads at all.

Of course, for any particular site, the feasibility of loading scripts at the bottom remains questionable. Alibaba's Chinese site, for instance, has inline JS in many places, and the page display relies heavily on it. I admit this is a long way from the ideal of unobtrusive scripting, but many "historical problems" are not so easy to solve.

Rule 7. Avoid CSS Expressions

CSS expressions are an IE-only feature (for example, width: expression(document.body.clientWidth > 600 ? '600px' : 'auto')) that computes a style value with JavaScript. Their cost is that the expression is re-evaluated constantly — on mouse moves, scrolls, and resizes — so it can quietly drag a page down. Workarounds exist, such as computing the value once or simulating the effect with extra wrapper elements, but this adds two more layers of meaningless nesting, which is definitely not good. A better way is needed.

Rule 8. Make JavaScript and CSS External

I think this one is easy to understand: it makes sense not only for performance but also for code maintainability. Writing CSS and JS into the page content saves two requests, but it also increases the size of the page; once external CSS and JS files are cached, on the other hand, they cost no extra HTTP requests at all. Of course, as mentioned earlier, developers of special pages such as the home page may still choose to inline their CSS and JS.

Rule 9. Reduce DNS Lookups

On the Internet, domain names and IP addresses correspond one to one. A domain name (kuqin.com) is easy for people to remember, but computers do not recognize it: machines "recognize" each other only by IP address, and every computer on the network has its own. The conversion between a domain name and an IP address is called domain-name resolution, also known as a DNS lookup. A single resolution takes 20-120 milliseconds, and until the lookup completes, the browser downloads nothing from that domain. Reducing DNS lookups therefore speeds up page loading. Yahoo recommends limiting the number of hostnames on a page to 2-4, which requires planning the page as a whole. At present we are not doing well here; many ad-delivery systems are dragging us down.

Rule 10. Minify JavaScript and CSS

The effect of minifying JS and CSS is obvious: fewer bytes on the page, and a smaller page naturally loads faster. Beyond shrinking the files, minification also offers a degree of protection for the code. We do this well. Common tools include JSMin and the YUI Compressor, and http://dean.edwards.name/packer/ provides a very convenient online packer. The jQuery site shows the size difference between the minified and unminified versions of the same JS file:

Of course, one downside of minification is that the code loses its readability. I believe many front-end friends know the feeling: Google's effects look cool, but view the source and you see a wall of characters squeezed together, with even the function names replaced — sweat-inducing! Wouldn't maintaining your own code like that be terribly inconvenient? The approach all of Alibaba's Chinese sites now take is to minify JS and CSS on the server at release time, which keeps our own working code easy to maintain.

Rule 11. Avoid Redirects

Not long ago I read the article "Internet Explorer and Connection Limits" on IEBlog. For example, when you enter http://www.enet.com.cn/eschool (without the trailing slash), the server automatically answers with a 301 redirect to http://www.enet.com.cn/eschool/ — you can see it happen in the browser's address bar. A redirect like this naturally costs time. That is just one example; redirects happen for many reasons, but what never changes is that each one adds another web request, so they should be reduced as much as possible.
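For the record, here is how the redirect in that example would be issued from PHP — the URL is just the article's illustration, and the third argument to header() sets the status code:

<?php
// A 301 Moved Permanently: the browser receives this response,
// then must open a second request for the new URL -- exactly the
// extra round trip this rule wants to avoid.
header('Location: http://www.enet.com.cn/eschool/', true, 301);
exit;
?>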

Rule 12. Remove Duplicate Scripts

This one goes without saying; it is true not only from a performance perspective but also from a coding-standards one. Yet we have to admit that we often add code that may duplicate what is already there, just to save time. Perhaps a unified CSS framework and JS framework would solve our problems better. Xiaozhu's view is right: scripts should not merely avoid duplication — they should be reusable.

Rule 13. Configure ETags

I don't really understand this one either, haha. InfoQ has a fairly detailed explanation, "Using ETags to Reduce Web Application Bandwidth and Load"; interested readers can look it up.
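Just to give the flavor of what an ETag buys you, here is a minimal hand-rolled conditional GET in PHP — a sketch that assumes a single server, since the rule's real-world caveat is that default Apache/IIS ETags embed server-specific data and therefore break caching across a server farm. The file name is a placeholder:

<?php
// Compute an ETag from the content and honor If-None-Match:
// when the browser's cached copy is still current, answer
// 304 Not Modified and send no body at all.
$content = file_get_contents('somepage.html');  // placeholder source
$etag    = '"' . md5($content) . '"';

header('ETag: ' . $etag);

if (isset($_SERVER['HTTP_IF_NONE_MATCH']) &&
    trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}

echo $content;
?>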

Rule 14. Make Ajax Cacheable

Does Ajax need caching too? When making an Ajax request, people often append a timestamp precisely to defeat caching. But "asynchronous" does not imply "instantaneous". Remember: even though AJAX messages are generated dynamically and affect only one user, they can still be cached.
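As a sketch of what that means in practice — the endpoint shape and the five-minute lifetime are illustrative choices of mine, not the article's — a PHP script answering an Ajax call can emit the same Expires/Cache-Control headers as Rule 3:

<?php
// A cacheable Ajax endpoint: the response carries explicit
// freshness headers, so repeated XMLHttpRequests within five
// minutes are served from the browser cache.
header('Content-Type: application/json');
header('Cache-Control: max-age=300');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 300) . ' GMT');

echo json_encode(array('user' => 'demo', 'items' => array(1, 2, 3)));
?>

On the client side, the corresponding move is to stop appending the cache-busting timestamp for data that can tolerate a few minutes of staleness.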

