# How ATS implements caching strategies to increase dynamic service throughput
Let's first take a look at the traffic graph from just after the policy adjustment (the graph itself is not reproduced here).
To improve the user experience and raise the cache amplification ratio without triggering customer complaints, we put a lot of effort into caching. We separated large files from small files, and within the small files we separated dynamic content from static content. Essentially everything cacheable was already being cached; only dynamic content was still untouched. Under the old policy, dynamic content was proxied straight through, 1:1 in and out. But some customers would not let it go and insisted on reaching a certain amplification ratio, so dynamic content had to go under the knife.

Before doing so, I analyzed which dynamic content could safely be cached and ran a lot of tests against the ATS caching strategy, and learned a great deal. Out of the box, ATS follows the HTTP protocol strictly and caches in the most conservative way: only responses whose headers declare an explicit freshness lifetime are stored. Dynamic-looking URLs, responses with cookies, responses requiring authorization, and `no-cache` responses are not cached (I won't list every corresponding ATS parameter here). To be safe on quality, we skipped dynamic content carrying cookies or authorization outright; the risk was simply too high. That left two categories worth trying:

1. Images and other content on dynamic URLs that *do* carry explicit lifetime headers (we assume the site's header information is trustworthy);
2. Images and other content, on static or dynamic URLs, *without* an explicit lifetime header, including responses with no caching headers at all or with only a `Last-Modified` header.

Category 1 is easy to handle: ATS has a parameter for it, just turn it on: `proxy.config.http.cache.cache_urls_that_look_dynamic INT 1`. Category 2 is where the real technical work lies.
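As an aside, what counts as a "dynamic-looking" URL? Per the ATS documentation for `cache_urls_that_look_dynamic`, URLs that end in `.asp` or contain `?`, `;`, or `cgi` are treated as dynamic. A rough sketch of that check in Python (an illustration of the documented heuristic, not the actual ATS code):

```python
def looks_dynamic(url: str) -> bool:
    """Rough approximation of the heuristic ATS uses to flag
    'dynamic-looking' URLs: ends in .asp, or contains '?', ';', or the
    substring 'cgi' (per the docs for
    proxy.config.http.cache.cache_urls_that_look_dynamic).
    Illustrative only -- not the real ATS implementation."""
    path = url.split("://", 1)[-1]
    return (
        path.endswith(".asp")
        or "?" in path
        or ";" in path
        or "cgi" in path
    )

# With cache_urls_that_look_dynamic left at 0, URLs like these bypass the cache:
print(looks_dynamic("http://img.example.com/pic.php?id=42"))   # True
print(looks_dynamic("http://img.example.com/cgi-bin/pic"))     # True
print(looks_dynamic("http://img.example.com/static/pic.jpg"))  # False
```

Setting the parameter to `1` tells ATS to go ahead and cache such responses anyway, which is why it only makes sense when you trust the origin's lifetime headers.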
First, the header requirement. The gatekeeper online is `proxy.config.http.cache.required_headers INT 2`: only by relaxing this restriction can category 2 be cached at all, so setting it to `0` is the first prerequisite. But once it is relaxed, how do we keep the service correct? A CAPTCHA image, for example, carries no caching headers at all; the conservative default serves it correctly every time, but caching it blindly would certainly cause trouble.

Digging further: for content without lifetime headers, ATS bounds the cache time with a maximum and minimum heuristic lifetime, via these two parameters: `proxy.config.http.cache.heuristic_min_lifetime INT 3600` and `proxy.config.http.cache.heuristic_max_lifetime INT 864000`. For responses carrying only a `Last-Modified` header, the lifetime is derived through an aging factor: `proxy.config.http.cache.heuristic_lm_factor FLOAT 0.1`.

That gave me an idea: store the content when it arrives, but before serving each response, have ATS send an IMS (`If-Modified-Since`) request to the origin asking whether anything has changed. Since this request only *asks*, it costs almost no traffic. If nothing changed, the user sees a `TCP_REFRESH_HIT`: we did go back to the origin, but the body is still served from cache. If something changed, the user sees a `TCP_REFRESH_MISS` and gets the latest content. Either way, egress traffic goes up essentially for free.

But how to set the parameters? It suddenly occurred to me that setting all of the above to `0` should, in theory, achieve exactly this: after the object is stored on the first request, every request from the second onward revalidates against the origin with an IMS. I immediately tried it in a test environment, and it behaved as expected. Excited, I pushed the policy to production right away and monitored it through the traffic-graph tool for an hour.
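For intuition, here is a sketch in Python (not ATS code) of how a heuristic freshness lifetime is typically computed for a response that carries only a `Last-Modified` header: the lifetime is `lm_factor` times the object's age at reception, clamped between the min and max lifetimes. This mirrors the role the three parameters above play; the exact ATS internals may differ.

```python
def heuristic_lifetime(date: int, last_modified: int,
                       lm_factor: float = 0.1,
                       min_lifetime: int = 3600,
                       max_lifetime: int = 864000) -> int:
    """Heuristic freshness lifetime (seconds) for a response with only a
    Last-Modified header: lm_factor * (Date - Last-Modified), clamped to
    [min_lifetime, max_lifetime]. A sketch of the roles played by
    proxy.config.http.cache.heuristic_{min,max}_lifetime and
    heuristic_lm_factor -- not the exact ATS algorithm."""
    derived = int(lm_factor * max(0, date - last_modified))
    return max(min_lifetime, min(max_lifetime, derived))

# Object last modified 10 days (864000 s) before it was fetched:
print(heuristic_lifetime(date=864000, last_modified=0))  # 86400 (1 day)

# With everything forced to 0, the object is never considered fresh,
# so ATS must revalidate (send If-Modified-Since) on every request:
print(heuristic_lifetime(864000, 0, lm_factor=0.0,
                         min_lifetime=0, max_lifetime=0))  # 0
```

The second call is exactly the "set everything to 0" trick from the text: a zero lifetime means the cached copy is always stale, so every hit turns into an IMS revalidation.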
Overall, origin traffic dropped, but something odd also appeared: with tsar I could see that at certain moments origin traffic was still nearly equal to egress. Analyzing with `traffic_logstats -s`, I found a large number of `ERR_CLIENT_ABORT` entries, which is really bad news. That log code means the client connected and then actively dropped the connection before the data was fully delivered; a few of them are normal, a lot of them is a problem.

To confirm, I found a 1 MB image with a `max-age` header for testing: I purged it first, then curled it and immediately disconnected to manufacture this error log. On the second visit the result was `TCP_HIT`, and the downloaded image opened normally. So that was it: on a client abort, ATS keeps downloading the object into the cache. Because the quality of these domains was poor, origin traffic was sometimes very high. Continuing to search on Google, I found this parameter: `proxy.config.http.background_fill_completed_threshold FLOAT 0.5` (the default is `0`). It sets the fraction of the object that must already have been downloaded when the client disconnects for ATS to continue the download into the cache; below that, the origin fetch is dropped as well. Without thinking twice I set it to `0.5`, ran a test, and pushed the update immediately. Traffic stabilized and throughput went up.

So in the end, a small success. The parameters now in production are not set in stone; they will still need testing and tuning as the business evolves, but that is part of the fun. Every adjustment is a trade-off, and the current ones cost us: 1. increased disk read/write IO; 2. increased CPU load.
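The decision behind `background_fill_completed_threshold` can be pictured like this (a Python sketch under my reading of the parameter, not the ATS implementation): when a client aborts, ATS keeps fetching the object into the cache only if the fraction already transferred has reached the threshold.

```python
def continue_background_fill(bytes_done: int, content_length: int,
                             threshold: float = 0.5) -> bool:
    """Sketch of the decision behind
    proxy.config.http.background_fill_completed_threshold: after a
    client abort, keep fetching the object into the cache only if at
    least `threshold` of it has already been transferred.
    Illustrative only -- not the ATS source."""
    if content_length <= 0:
        return False  # unknown length: no fraction to compare against
    return bytes_done / content_length >= threshold

# 1 MB image, client aborts after 700 KB: the fill continues,
# so the next request is a TCP_HIT.
print(continue_background_fill(700 * 1024, 1024 * 1024))  # True
# Client aborts after 100 KB: with threshold 0.5 the origin fetch
# is dropped too, saving the wasted back-to-origin traffic.
print(continue_background_fill(100 * 1024, 1024 * 1024))  # False
```

This is why raising the threshold from `0` to `0.5` tamed the origin traffic: early aborts no longer forced a full-object fetch from the origin.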