
Performance optimization method for Ajax non-refresh paging

亚连 | Original | 2018-05-25 11:39:39

This article introduces performance optimization methods for Ajax non-refresh paging. Readers who need this can use it as a reference.

Ajax non-refresh paging is already familiar to everyone. Typically there is a JS method on the front-end page that requests a server-side paging data interface through Ajax; after receiving the data, it builds an HTML structure on the page and presents it to the user, similar to the following:

<script type="text/javascript">
function getPage(pageIndex){
    ajax({
        url: "RemoteInterface.cgi",
        method: "get",
        data: {pageIndex: pageIndex},
        callback: callback
    });
}
function callback(datalist){
    // todo: build the HTML structure from the returned datalist and present it to the user.
}
</script>
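The ajax() helper above is assumed rather than defined in the article. A minimal sketch of what such a wrapper might look like, assuming a GET request with query-string parameters and a JSON response:

<script type="text/javascript">
// Minimal sketch of the assumed ajax() helper: GET with query-string
// parameters, parsed JSON handed to the caller's callback.
function ajax(options){
    var pairs = [];
    for (var key in options.data) {
        pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(options.data[key]));
    }
    var xhr = new XMLHttpRequest();
    xhr.open(options.method || "get", options.url + "?" + pairs.join("&"), true);
    xhr.onreadystatechange = function(){
        if (xhr.readyState === 4 && xhr.status === 200) {
            options.callback(JSON.parse(xhr.responseText));
        }
    };
    xhr.send(null);
}
</script>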

Here, RemoteInterface.cgi is a server-side interface. Space is limited, so the example code involved may not be complete; it is only meant to convey the idea clearly.

On the UI there may be paging controls of various styles that everyone is familiar with, but in the end it comes down to the user clicking a control to trigger the getPage(pageIndex) method here, and this getPage method may not be so simple.

Written as in code snippet 1, every time the user clicks to turn the page, RemoteInterface.cgi is requested once. If we ignore possible data updates, then apart from the first time, every later remote interface request triggered by getPage(1), getPage(2), getPage(3), and so on, along with the resulting network traffic, is repetitive and unnecessary. The data for each page could be cached on the page in some form the first time it is requested. If the user wants to look back at a page he has already turned to, the getPage method should first check whether the local cache contains that page's data; if so, it is presented to the user again directly instead of calling the remote interface. Following this idea, we can modify code snippet 1 as follows:

<script type="text/javascript">
var pageDatalist = {};
function getPage(pageIndex){
    if (pageDatalist[pageIndex]) { // the local data list already contains this page's data
        showPage(pageDatalist[pageIndex]); // present the cached data directly
    } else {
        ajax({
            url: "RemoteInterface.cgi",
            method: "get",
            data: {pageIndex: pageIndex},
            callback: function(datalist){
                callback(pageIndex, datalist); // carry the page index through to the callback
            }
        });
    }
}
function callback(pageIndex, datalist){
    pageDatalist[pageIndex] = datalist; // cache the data
    showPage(datalist); // present the data
}
function showPage(datalist){
    // todo: build the HTML structure from the returned datalist and present it to the user.
}
</script>

This saves the round-trip time of network requests and, more importantly, saves valuable network traffic and reduces the load on the interface server. In low-speed network environments, or when the interface server is already under heavy load, this simple improvement can show an obvious optimization effect. The first of Yahoo's famous 34 performance rules is to minimize the number of HTTP requests, and Ajax asynchronous requests undoubtedly count as HTTP requests. Web applications with low traffic may not feel the need, but imagine a page with 10 million visits per day, where users turn an average of 5 pages and view one page repeatedly. Written as in code snippet 1, such a page triggers an average of 50 million data requests per day; written as in code snippet 2, it saves at least 10 million requests per day. If each request transfers 20 KB of data, that saves 10 million x 20 KB = 200,000,000 KB, roughly 190 GB of network traffic per day. The resources saved this way are quite considerable.
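The arithmetic above is easy to verify (all figures are the assumptions stated in the text, not measurements):

<script type="text/javascript">
// Checking the figures above; all inputs are assumptions from the text.
var requestsSavedPerDay = 10000000;                     // one cached repeat view per visit
var kbPerRequest = 20;
var kbSavedPerDay = requestsSavedPerDay * kbPerRequest; // 200,000,000 KB
var gbSavedPerDay = kbSavedPerDay / (1024 * 1024);      // ≈ 190.7 GB
</script>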

Going a step deeper, the data caching approach in code snippet 2 is worth discussing. We assumed earlier that the timeliness of paging data can be ignored, but in real applications timeliness is an unavoidable issue, and caching undoubtedly reduces it. A real caching scheme must be based on analyzing and weighing the application's timeliness requirements.

For content that does not particularly emphasize timeliness, an in-page cache should still be acceptable. First, the user will not stay on one page forever; whenever he navigates between pages or reloads, he gets updated data. In addition, a user who habitually refreshes the page can do so whenever he particularly wants to check the list for updates. If you are pursuing perfection, you can also consider a time window, say 5 minutes: page turns within 5 minutes of arriving on the current page read the in-page cache first, while page turns after 5 minutes request the server data again.
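One way to realize the time-window idea is a per-entry timestamp: a page read within the window comes from the cache, while an older entry triggers a fresh request. A sketch, as a variant of code snippet 2; the 5-minute figure follows the text, the field names are illustrative:

<script type="text/javascript">
// Sketch of a time-limited cache; the 5-minute window follows the text.
var CACHE_TTL = 5 * 60 * 1000;
var pageCache = {}; // pageIndex -> {time: ..., datalist: ...}
function getPage(pageIndex){
    var entry = pageCache[pageIndex];
    if (entry && (Date.now() - entry.time) < CACHE_TTL) {
        showPage(entry.datalist); // still fresh: no request
        return;
    }
    ajax({
        url: "RemoteInterface.cgi",
        method: "get",
        data: {pageIndex: pageIndex},
        callback: function(datalist){
            pageCache[pageIndex] = {time: Date.now(), datalist: datalist};
            showPage(datalist);
        }
    });
}
</script>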

In some cases, if we can predict the frequency of data updates, for example when the data may only change every few days, we can even consider using local storage and only request server data again after a certain period has passed, making the savings in requests and traffic even more thorough. Which caching approach is suitable ultimately depends on the product's timeliness requirements, but the principle is to save requests and traffic wherever possible, especially on pages with a very large number of visits.
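If updates really only happen every few days, the cache can even survive page reloads in localStorage. A sketch with an illustrative one-day expiry; the key name and helper names are made up for the example:

<script type="text/javascript">
// Sketch of a localStorage cache with an expiry; the one-day figure is illustrative.
var STORAGE_TTL = 24 * 60 * 60 * 1000;
function loadStoredPage(pageIndex){
    var raw = localStorage.getItem("pageData:" + pageIndex);
    if (!raw) return null;
    var entry = JSON.parse(raw);
    if (Date.now() - entry.time > STORAGE_TTL) {
        localStorage.removeItem("pageData:" + pageIndex); // expired: drop it and re-request
        return null;
    }
    return entry.datalist;
}
function storePage(pageIndex, datalist){
    localStorage.setItem("pageData:" + pageIndex,
        JSON.stringify({time: Date.now(), datalist: datalist}));
}
</script>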

For data with high timeliness requirements, is caching completely inappropriate, then? Of course not, but the overall approach needs to change. Generally speaking, "changes" mostly mean that items in the list have been added, removed, or modified while the vast majority of the data stays the same, so in most cases caching within a certain time window, as described above, still applies.

If the requirement is close to real-time data updates, you might be tempted to use a timer, for example executing getPage(pageIndex) and redrawing the list every 20 seconds. But recall the earlier assumption of 10 million daily page visits and you will see how frightening this is: at that traffic and polling frequency, the pressure on the server would be enormous. For how to handle this situation, look at how Gmail, 163 Mailbox, and Sina Mailbox handle their mail list pages. They meet almost all of our earlier assumptions at once: an extremely large number of daily visits, a requirement for near real-time data updates, and so on. Analyzing them with a network capture tool, it is not hard to find that they do not send a request to the server when the user repeatedly requests the same page number. To ensure the user is notified promptly and the mail list is updated when new mail arrives, they use a scheduled, repeated asynchronous request, but this request is only a status query rather than a refresh of the list. Only when the status indicates that there are updates do they request the updated data, or the status query interface itself directly returns the updated data when it finds an update. In fact, 163 Mailbox sets a relatively long status-query interval of about two minutes, and Sina Mailbox's interval is even longer at about five minutes; evidently they are trying hard to reduce the number of requests. However, this kind of handling cannot be done by the front end alone; the implementation must be designed together with the back-end interface.
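A sketch of the mailbox-style pattern: a timer issues a cheap status query, and the full list is fetched only when the status reports an update. The CheckStatus.cgi endpoint, the status fields, the watermark variable, and the two-minute interval (borrowed from the 163 Mailbox observation above) are all illustrative:

<script type="text/javascript">
// Sketch of timed status polling; endpoint, fields, and interval are illustrative.
var lastUpdateTime = 0;   // watermark of the newest data already shown
var currentPageIndex = 1; // page the user is currently viewing
setInterval(function(){
    ajax({
        url: "CheckStatus.cgi", // hypothetical lightweight status endpoint
        method: "get",
        data: {since: lastUpdateTime},
        callback: function(status){
            if (status.updated) {             // only fetch the list when something changed
                lastUpdateTime = status.time; // advance the watermark
                getPage(currentPageIndex);    // now request and redraw the list
            }
        }
    });
}, 2 * 60 * 1000);
</script>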

Now let's go back to the data caching approach in code snippet 2. Setting aside the number of requests and traffic saved, is the front-end implementation itself worth a closer look? As written in snippet 2, the raw data is stored, and when it is used again, showPage(datalist) has to rebuild the HTML structure from the data to present it to the user, even though we already went through this structure-building process before. Could we instead save the structure itself the first time it is created? That would cut repeated JS computation, which is especially worthwhile when the structure is complex. Think further: this structure has already been created on the page, and destroying it and creating a new one on every page turn also costs resources. Could we create it only on the first visit, not destroy it when turning pages, but simply hide it by controlling its CSS style, so that repeated page turns only toggle the visibility of the structures already built?
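A sketch of that keep-the-DOM idea: each page's structure is built once into its own container, and later visits simply toggle visibility instead of rebuilding. The showPage signature is extended with a pageIndex for the example, and the listContainer id and buildListHtml() helper are assumptions:

<script type="text/javascript">
// Sketch of DOM reuse: build each page's structure once, then only show/hide.
var pageNodes = {};          // pageIndex -> the element already built for that page
var visiblePageIndex = null;
function showPage(pageIndex, datalist){
    if (visiblePageIndex !== null) {
        pageNodes[visiblePageIndex].style.display = "none"; // hide, don't destroy
    }
    var node = pageNodes[pageIndex];
    if (!node) {
        node = document.createElement("div");
        node.innerHTML = buildListHtml(datalist); // hypothetical row-rendering helper
        document.getElementById("listContainer").appendChild(node); // assumed container
        pageNodes[pageIndex] = node;
    }
    node.style.display = "";
    visiblePageIndex = pageIndex;
}
</script>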

Finally, the methods discussed here may not suit every scenario, but they should offer some inspiration, and you can try one or two of them where appropriate. If you extend the ideas, they need not be limited to non-refresh paging. Feel free to discuss them together here.

The above is what I have compiled for everyone; I hope it will be helpful to you.


