


Yahoo team experience: 34 golden rules for website page performance optimization
1. Minimize the number of HTTP requests
80% of the end user’s response time is spent downloading the components of the page: images, style sheets, scripts, Flash, and so on. Reducing the number of components in the page reduces the number of HTTP requests required to render it, and this is a crucial step in improving page speed.
The obvious way to reduce the number of components is to simplify the page design. But is there a way to keep pages rich in content while still speeding up response times? Here are a few techniques that reduce the number of HTTP requests without sacrificing rich page content.
Combined files reduce HTTP requests by merging all scripts into a single file and, likewise, all CSS into a single style sheet. This can be awkward when the scripts or style sheets differ from page to page, but even so, combining files should be treated as an important step in improving page performance.
CSS sprites are an effective way to reduce the number of image requests. Combine all of your background images into a single image file, then use the CSS background-image and background-position properties to display just the portion you need (a short sketch follows this list of techniques);
Image maps combine multiple images into a single image. The overall file size is about the same, but the number of HTTP requests drops. Image maps only work when the images sit next to each other on the page, as in a navigation bar. Defining the image coordinates can be tedious and error-prone, and image-map navigation is not accessible, so this technique is not recommended;
Inline images use the data: URL scheme to embed the image data in the page itself, which can increase the size of the HTML document. Placing inline images in a (cacheable) style sheet reduces HTTP requests without bloating the page. At the time of writing, however, inline images were not supported by all mainstream browsers.
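As a hedged illustration of the CSS sprite technique mentioned above, the sketch below assumes a combined image named sprites.png (a hypothetical file) containing two 16x16 icons stacked vertically:

/* sprites.png is a hypothetical combined image: two 16x16 icons stacked
   vertically, so a single HTTP request serves both icons. */
.icon {
    background-image: url(/images/sprites.png);
    background-repeat: no-repeat;
    width: 16px;
    height: 16px;
    display: inline-block;
}
/* Shift the background so each class shows a different region of the sprite. */
.icon-home   { background-position: 0 0; }
.icon-search { background-position: 0 -16px; }

In the markup, an element such as <span class="icon icon-search"></span> then displays the second icon without requesting a separate image file.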
Reducing the number of HTTP requests in your page is the place to start. It is the most important guideline for improving wait times for first-time visitors. As Tenni Theurer explains in her blog post Browser Cache Usage - Exposed!, HTTP requests account for 40% to 60% of response time when nothing is cached yet. Make the experience fast for people visiting your website for the first time!
2. Reduce the number of DNS lookups
The Domain Name System (DNS) maps domain names to IP addresses, just as a phone book maps names to phone numbers. When you type www.dudo.org into the browser address bar, a DNS resolver returns the IP address corresponding to that domain name. DNS resolution takes time: typically 20 to 120 milliseconds to return the IP address for a given hostname, and the browser can download nothing from that hostname until the lookup completes.
DNS lookups are cached for better performance. The caching can happen on a special caching server, typically run by the user's ISP or local area network, but there is also caching on the individual user's computer: DNS information is kept in the operating system's DNS cache (the DNS Client service on Microsoft Windows). In addition, most browsers have their own caches, separate from the operating system's. As long as the browser keeps a DNS record in its own cache, it does not need to ask the operating system during a request.
By default, Internet Explorer caches DNS lookup records for 30 minutes, and its key value in the registry is DnsCacheTimeout. Firefox's cache time for DNS lookup records is 1 minute, and its option in the configuration file is network.dnsCacheExpiration (Fasterfox changed this option to 1 hour).
When the client's DNS cache is empty (in both the browser and the operating system), the number of DNS lookups equals the number of unique hostnames in the page. This includes the hostnames used in the page's own URL and in its images, script files, style sheets, Flash objects, and so on. Reducing the number of unique hostnames reduces the number of DNS lookups.
However, reducing the number of hostnames also reduces the amount of parallel downloading in the page. Avoiding DNS lookups cuts response time, while reducing parallel downloads may increase it. My guideline is to split a page's components across at least two but no more than four hostnames. The result is a good compromise between reducing DNS lookups and preserving a high degree of parallel downloading.
3. Avoid redirects
Redirects are accomplished using the 301 and 302 status codes. Here is an example of the HTTP headers in a 301 response:
HTTP/1.1 301 Moved Permanently
Location: http://example.com/newuri
Content-Type: text/html
The browser automatically takes the user to the URL given in the Location field. All the information necessary for a redirect is in the headers; the body of the response can be empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate that it should be. The meta refresh tag and JavaScript are other ways to send users to a different URL, but if you must do a redirect, the standard 3xx HTTP status codes are preferable, chiefly so that the back button keeps working correctly.
But remember that redirects hurt the user experience. Inserting a redirect between the user and the HTML document delays everything in the page, because nothing in the page (images, Flash, and so on) can be downloaded until the HTML document itself has arrived.
One redirect that web developers often overlook, and that wastes response time, happens when a URL is missing its trailing slash (/). For example, visiting http://astrology.yahoo.com/astrology actually returns a 301 redirect pointing to http://astrology.yahoo.com/astrology/ (note the trailing slash). On an Apache server you can avoid this with Alias, mod_rewrite, or the DirectorySlash directive.
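As a hedged sketch only (the patterns and paths here are illustrative, not a drop-in configuration), an Apache setup might sidestep the trailing-slash redirect with an internal rewrite rather than an external 301:

# Illustrative Apache configuration; adjust patterns and paths for your site.
RewriteEngine On
# Internally rewrite /astrology to /astrology/ so the browser never sees a 301.
RewriteRule ^/astrology$ /astrology/ [L]
# Alternatively, mod_dir's DirectorySlash directive controls whether Apache
# issues the trailing-slash redirect at all (disable it only with care).
# DirectorySlash Off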
Connecting an old website to a new one is another common use of redirects, as is connecting different parts of a site and routing users based on criteria such as browser type or account type. Redirecting between two websites is simple and needs very little code. Although this reduces complexity for developers, it also degrades the user experience. An alternative, if both code bases live on the same server, is to use Alias and mod_rewrite. If the redirect exists because the domain names differ, you can instead create a CNAME (a DNS record that maps one domain name to another) and combine it with Alias or mod_rewrite.
4. Make Ajax cacheable
One of the often mentioned benefits of Ajax is the immediacy of feedback it brings to users due to its asynchronous nature of transmitting information from the backend server. However, using Ajax does not guarantee that users will not spend time waiting for asynchronous JavaScript and XML responses. In many applications, whether the user needs to wait for a response depends on how Ajax is used. For example, in a Web-based email client, users must wait for Ajax to return email query results that meet their criteria. It's important to remember that "asynchronous" does not mean "immediate".
To improve performance, it is important to optimize these Ajax responses. The most important way to improve Ajax performance is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Several of the other rules also apply to Ajax:
Gzip compressed files
Reduce the number of DNS lookups
Streamline JavaScript
Avoid redirects
Configure ETags
Let's look at an example: a Web 2.0 email client uses Ajax to download the user's address book automatically. If the user has not modified the address book since the last time they used the email application, and the Ajax response is made cacheable with an Expires or Cache-Control header, the address book can be read straight from the cache. The browser must be told whether to use the cached copy or issue a new request. This can be done by adding a timestamp of the last edit time to the Ajax URL that fetches the address book, for example &t=11900241612. If the address book has not been edited since the last download, the timestamp is unchanged, the URL matches the cached response, and the address book loads from the browser's cache, eliminating an HTTP round trip. If the user has modified the address book, the new timestamp guarantees that the URL will not match the cached response, and the browser requests the updated address book.
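A minimal sketch of this idea, assuming a hypothetical /addressbook endpoint and a lastModified timestamp that the application renders into the page:

// Hypothetical endpoint and timestamp; the real names depend on your application.
// Because the timestamp is part of the URL, an unchanged address book maps to a
// URL the browser already has cached (given Expires/Cache-Control headers on the
// response), so no new HTTP request is needed.
function loadAddressBook(lastModified, onLoaded) {
  var url = '/addressbook?t=' + encodeURIComponent(lastModified);
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true); // asynchronous GET, eligible for caching
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onLoaded(JSON.parse(xhr.responseText));
    }
  };
  xhr.send();
}

// Usage: the server supplies the last-edit timestamp when rendering the page.
loadAddressBook('11900241612', function (book) {
  console.log('Loaded ' + book.length + ' contacts');
});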
Even if your Ajax responses are generated dynamically, and even if they apply to only a single user, they should still be cacheable. Doing so makes your Web 2.0 applications faster.
5. Defer loading content
Take a closer look at your web page and ask yourself: "What content absolutely has to be there for the initial render? What content and structure can be loaded later?"
JavaScript is an ideal candidate for splitting the work into two parts around the onload event. For example, if you have JavaScript code and libraries for drag and drop and animations, they can wait, because dragging elements around the page only happens after the initial render. Other candidates for deferred loading include hidden content (content that appears only after a user action) and images below the fold.
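As an illustrative sketch (the file name /js/enhancements.js is a placeholder), non-essential scripts can be injected only after the onload event fires, so they never block the initial render:

// Placeholder script path; substitute your own drag-and-drop/animation code.
function loadDeferredScript(src) {
  var script = document.createElement('script');
  script.src = src;
  document.getElementsByTagName('head')[0].appendChild(script);
}

// Wait for the load event so the deferred script never delays initial rendering.
window.addEventListener('load', function () {
  loadDeferredScript('/js/enhancements.js');
});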
Tools can save you the work: the YUI Image Loader lets you defer images below the fold, and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example, open Firebug's Net panel while looking at the Yahoo! home page.
Performance goals and other website development best practices go hand in hand here. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience, but you have to make sure the page still works without it. After confirming that the page runs properly without JavaScript, load the deferred scripts to add fancier effects such as drag and drop and animation.
6. Preloading
Preloading may look like the exact opposite of deferred loading, but it actually serves a different goal. With preloading, the browser uses idle time to request page components (such as images, style sheets, and scripts) that are likely to be needed in the near future. That way, when the user goes to the next page, most of its content is already in the cache and the page displays much faster.
Several preloading methods are provided below:
Unconditional preloading: as soon as the onload event fires, go ahead and fetch additional components. Take google.com as an example: you can see how its sprite image is requested in the onload handler. That sprite is not needed on the google.com home page itself, but it is needed on the search results page. (A sketch of unconditional preloading follows this list.)
Conditional preloading: based on a user action, make an educated guess about where the user is headed next and preload accordingly. On search.yahoo.com you can see how additional components are requested as soon as you start typing in the search box.
Anticipated preloading: preload ahead of launching a redesign. It often happens after a redesign that users complain, "The new site looks cool, but it's slower than before." Part of the problem is that users had a full cache of your old site and nothing cached for the new one. You can mitigate this by preloading some content before the new site even launches: while users are still on the old site, use the browser's idle time to fetch the images and scripts the new site will use, so the new pages come up faster.
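A minimal sketch of unconditional preloading on the onload event; the image URLs below are placeholders, not the actual Google or Yahoo! assets:

// Placeholder URLs; substitute the components the next page actually needs.
var assetsForNextPage = [
  '/images/results-sprite.png',
  '/images/logo-small.png'
];

window.addEventListener('load', function () {
  // Once the current page has finished loading, the browser is idle, so
  // fetching these now simply warms the cache for the next page view.
  assetsForNextPage.forEach(function (src) {
    var img = new Image();
    img.src = src;
  });
});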
7. Reduce the number of DOM elements
A complex page means more bytes to download, and it also means slower DOM access in JavaScript. Looping through 500 DOM elements when you attach an event handler, for example, is quite different from looping through 5,000.
A large number of DOM elements is a sign that there is something on the page that could be slimmed down without removing content, just by improving the markup. Are you using tables for layout? Have you thrown in extra <div> elements just to fix layout issues?
The YUI CSS utilities can be a great help with layout: grids.css handles the overall layout, while fonts.css and reset.css strip away the browsers' default formatting. They offer a chance to take a fresh look at the markup in your page, for example using a <div> only when it makes sense semantically, not just because it renders a new line.
The number of DOM elements is easy to calculate. You only need to enter in the Firebug console:
document.getElementsByTagName('*').length
So how many DOM elements is too many? Check comparable pages that use markup well. The Yahoo! home page, for example, is a content-heavy page, yet it uses only about 700 elements (HTML tags).
8. Split page components across domains
Splitting page components across hostnames lets you maximize parallel downloading. Because of the cost of DNS lookups, though, make sure you use no fewer than two and no more than four hostnames. For example, you can serve the HTML document and dynamic content from www.example.org, and host the page's static components (images, scripts, CSS) on statics1.example.org and statics.example.org.
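A brief markup sketch of this split, using the hostnames from the example above (the file names are placeholders):

<!-- The HTML document and dynamic content come from www.example.org -->
<link rel="stylesheet" type="text/css" href="http://statics1.example.org/css/site.css">
<script type="text/javascript" src="http://statics1.example.org/js/site.js"></script>
<img src="http://statics.example.org/images/logo.png" alt="Example logo">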
You can find more relevant information in the article Maximizing Parallel Downloads in the Carpool Lane co-written by Tenni Theurer and Patty Chi.
9. Minimize the number of iframes
The iframe element lets you embed one HTML document inside the parent document. It's important to understand how iframes work in order to use them effectively.
10. Put style sheets at the top
While researching performance at Yahoo!, we found that moving style sheets into the <head> of the document appears to speed up page rendering. This is because placing the style sheets in the <head> allows the page to render progressively.
Front-end engineers who care about performance want a page to render progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slow connections. The importance of giving users visual feedback, such as a progress indicator, has been well researched and documented. In our case the HTML page itself is the progress indicator: when the browser renders the page progressively, the header, the navigation bar, the logo at the top, and so on all serve as visual feedback for the user waiting for the page, which improves the overall experience.
The problem with putting style sheets near the bottom of the document is that it prevents progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements if their styles change, so the user is left staring at a blank page.
The HTML specification clearly states that style sheets are to be included in the <head> of the page.
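A minimal illustration of the recommended placement (site.css is a placeholder file name):

<!-- Putting the stylesheet in the head lets the browser render the page progressively. -->
<html>
<head>
  <title>Example page</title>
  <link rel="stylesheet" type="text/css" href="/css/site.css">
</head>
<body>
  <!-- page content -->
</body>
</html>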
