
Let's analyze Redis cache consistency, cache penetration, cache breakdown and cache avalanche issues together

WBOY
2022-05-19 10:12:20

This article brings you relevant knowledge about Redis. It mainly introduces cache consistency, cache penetration, cache breakdown, cache avalanche, and the consistency between the cache and the DB when cached data is synchronized on writes. I hope it will be helpful to everyone.



(1) Cache invalidation consistency problem

The general way to use a cache is: read the cache first; if the data does not exist there, read it from the DB and write the result into the cache; the next time the data is read, it can be obtained directly from the cache.

Data modification is done by first invalidating the cached data and then modifying the DB content. This avoids the situation where the DB modification succeeds but the cached entry cannot be cleared because of a network or other problem, leaving dirty data behind. It still cannot prevent dirty data in a concurrent scenario, though. Assume the business has a large number of read and modify requests for the data Key:Hello Value:World. Thread A reads Key:Hello from OCS, gets a Not Found result, requests the data from the DB, and obtains Key:Hello Value:World; it then prepares to write this data to OCS. Before the write happens (network latency or waiting for the CPU may slow thread A down), another thread B requests to change the data to Key:Hello Value:OCS and first invalidates the cache (thread B does not know whether the entry exists, so it issues the invalidation unconditionally). OCS processes the invalidation successfully. Control returns to thread A, which continues and writes Key:Hello Value:World into the cache; thread A's task ends. Thread B also successfully modifies the DB content to Key:Hello Value:OCS, and the cache is now stale.

To solve this problem, OCS extended the Memcached protocol (the public cloud will support it soon) with a deleteAndIncVersion interface. This interface does not actually delete the data; it marks the data as expired and increments its version number. If the data does not exist, it writes NULL and generates a random version number. OCS writes support an atomic comparison of version numbers: if the incoming version number matches the version saved by OCS, or the original data does not exist, the write is allowed; otherwise the modification is refused.

Back to the scenario just described: thread A reads Key:Hello from OCS, gets a Not Found result, requests the data from the DB, and obtains Key:Hello Value:World; it then prepares to write this data to OCS with the default version number 1. Before A writes to OCS, thread B initiates its modification to Key:Hello Value:OCS and first performs the delete: OCS handles the deleteAndIncVersion request and generates a random version number, 12345 (by convention greater than 1000). Control returns to thread A, which continues and requests to write Key:Hello Value:World; the cache system now finds that the incoming version number does not match (1 != 12345), the write fails, and thread A's task ends. Thread B also successfully modifies the DB content to Key:Hello Value:OCS.

At this point the data in OCS is Key:Hello Value:NULL Version:12345, while the data in the DB is Key:Hello Value:OCS. A subsequent read task will load the DB data into OCS again.
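Redis does not ship a deleteAndIncVersion command, but a similar version-checked write can be approximated with Lua scripts, which Redis executes atomically. Below is a minimal sketch using the Jedis client, assuming a local Redis 4.0+ and storing the value and its version as two hash fields; the key names and the "version greater than 1000" convention follow the scenario above and are illustrative only.

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class VersionedCache {
    // Mark the entry invalid and bump its version; if the entry is missing,
    // create a tombstone with the random version passed in (mimics deleteAndIncVersion).
    private static final String DELETE_AND_INC_VERSION =
        "local v = redis.call('HGET', KEYS[1], 'version') " +
        "if v then redis.call('HSET', KEYS[1], 'value', '', 'version', v + 1) " +
        "else redis.call('HSET', KEYS[1], 'value', '', 'version', ARGV[1]) end " +
        "return redis.call('HGET', KEYS[1], 'version')";

    // Write only if the caller's version matches the stored one,
    // or no entry exists at all: an atomic compare-and-set on the version.
    private static final String CAS_WRITE =
        "local v = redis.call('HGET', KEYS[1], 'version') " +
        "if (not v) or v == ARGV[2] then " +
        "  redis.call('HSET', KEYS[1], 'value', ARGV[1], 'version', ARGV[2]) return 1 " +
        "end return 0";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.del("Hello");
            // Thread B's invalidation lands first, with a random version > 1000...
            long rnd = 1000 + ThreadLocalRandom.current().nextLong(9000);
            Object ver = jedis.eval(DELETE_AND_INC_VERSION, 1, "Hello", String.valueOf(rnd));
            System.out.println("version after invalidation: " + ver);
            // ...so thread A's stale write with the default version 1 is refused.
            Object ok = jedis.eval(CAS_WRITE, 1, "Hello", "World", "1");
            System.out.println("stale write accepted? " + ok);  // 0
        }
    }
}
```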

(2) Write synchronization of cached data and the consistency problem with the DB

As a website grows in scale and its reliability requirements rise, it will face deployment across multiple IDCs. Each IDC has an independent DB and cache system, and cache consistency becomes a prominent issue.

First of all, to stay efficient, the cache system avoids disk IO, so it does not even write a BINLOG; and for performance, the cache system only synchronizes deletes, not writes. As a result the cache synchronization will generally arrive before the DB synchronization (the cache system is, after all, much more efficient). This creates a window in which the cache holds no data while the DB still holds old data. If a business request for the data arrives in this window, the cache read is Not Found, and the old data read from the DB and loaded into the cache is still old data. When the DB data synchronization arrives, only the DB is updated, and the dirty data in the cache is never cleared.


As the above situation shows, the root cause of the inconsistency is that the two heterogeneous systems cannot coordinate their synchronization: there is no guarantee that the DB data is synchronized first and the cached data afterwards. So we need to consider how the cache system can wait for the DB synchronization, or whether the two can share one synchronization mechanism. Driving cache synchronization from the DB BINLOG as well is a feasible solution.

The DB in IDC1 is synchronized to the DB in IDC2 through the BINLOG. In this setup, data modifications in IDC2's DB also generate their own BINLOG, and cache synchronization can be driven from that IDC2-DB BINLOG: a cache-synchronization module parses the BINLOG and invalidates the corresponding cache keys, changing the synchronization from parallel to serial and thereby guaranteeing the order. A sketch of such a module follows.
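Below is a minimal sketch of that cache-synchronization module, assuming some upstream component (for example a BINLOG parser such as Canal) has already decoded the BINLOG into simple events and pushes them onto a queue in BINLOG order; the BinlogEvent type and the table:primaryKey cache-key convention are illustrative assumptions, not part of the original design.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import redis.clients.jedis.Jedis;

public class BinlogCacheInvalidator implements Runnable {
    // Illustrative decoded-BINLOG event: which table row was modified.
    record BinlogEvent(String table, String primaryKey) {}

    private final BlockingQueue<BinlogEvent> events = new LinkedBlockingQueue<>();
    private final Jedis jedis = new Jedis("localhost", 6379);

    public void publish(BinlogEvent e) { events.add(e); }  // called by the BINLOG parser

    @Override
    public void run() {                                    // single consumer = serial order
        try {
            while (true) {
                BinlogEvent e = events.take();
                jedis.del(e.table() + ":" + e.primaryKey()); // invalidate only, never write
            }
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
}
```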

(3) Cache penetration (queries for nonexistent data send unnecessary traffic to the DB)

Method 1: a Bloom filter. It is an extremely space-efficient probabilistic data structure used to determine whether an element is in a set (similar to a HashSet). Its core is a long binary vector and a series of hash functions; Google's Guava library provides an implementation (see the sketch below). Its limitations: 1) there is a false-positive rate, and it rises as the number of stored elements grows; 2) under normal circumstances, elements cannot be deleted from a Bloom filter; 3) choosing the bit-array length and the number of hash functions is an involved process. Typical usage scenarios for Bloom filters: 1) spam address filtering (the number of addresses is huge); 2) deduplicating crawler URLs; 3) mitigating the cache penetration problem.
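A minimal sketch using Guava's BloomFilter, assuming the set of keys that actually exist in the DB can be preloaded; the sizing figures are illustrative and should be tuned to the real key space.

```java
import java.nio.charset.StandardCharsets;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class BloomFilterDemo {
    public static void main(String[] args) {
        // Expect ~1,000,000 keys with a 1% false-positive rate.
        BloomFilter<String> existingKeys = BloomFilter.create(
                Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

        existingKeys.put("Hello");  // register every key known to exist in the DB

        // Requests for keys that definitely do not exist are rejected
        // before they ever reach the cache or the DB.
        System.out.println(existingKeys.mightContain("Hello"));     // true
        System.out.println(existingKeys.mightContain("NoSuchKey")); // false (almost always)
    }
}
```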

Method 2: store the empty result as well, and give the empty result its own (short) expiration time, as sketched below.
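A minimal read-through sketch of empty-result caching with the Jedis client; the sentinel value, the loadFromDb helper, and the TTLs are illustrative assumptions.

```java
import redis.clients.jedis.Jedis;

public class EmptyResultCaching {
    private static final String NULL_MARKER = "<null>";  // sentinel meaning "DB has no row"

    // Hypothetical DB lookup; returns null when the row does not exist.
    static String loadFromDb(String key) { return null; }

    static String get(Jedis jedis, String key) {
        String cached = jedis.get(key);
        if (cached != null) {
            return NULL_MARKER.equals(cached) ? null : cached;
        }
        String value = loadFromDb(key);
        if (value == null) {
            jedis.setex(key, 60, NULL_MARKER);   // short TTL so a later insert becomes visible
        } else {
            jedis.setex(key, 3600, value);
        }
        return value;
    }
}
```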

(4) Cache avalanche (many keys are set with the same expiration time, so they expire together and flood the DB)

Method 1: most system designers consider using locks or queues to ensure that the cache is written by a single thread (or process), so that when entries fail, the large number of concurrent requests does not all fall on the underlying storage system.

Method 2: add a random offset to each expiration time so that keys do not all expire at the same moment, as in the sketch below.
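A one-method sketch of expiration-time jitter with the Jedis client; the base TTL and jitter range are illustrative.

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredTtl {
    static void put(Jedis jedis, String key, String value) {
        int baseTtl = 3600;                                     // one hour, illustrative
        int jitter = ThreadLocalRandom.current().nextInt(300);  // up to 5 extra minutes
        jedis.setex(key, baseTtl + jitter, value);              // keys no longer expire together
    }
}
```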

(5) Cache breakdown (a hotspot key expires under a large number of concurrent read requests, causing a small avalanche)

When the cache entry expires at a certain point in time, and there happen to be a large number of concurrent requests for this key at that moment, these requests all find the cache expired and typically load the data from the back-end DB and write it back to the cache. The large concurrency can overwhelm the back-end DB in an instant.

Method 1: use a mutex key, which distributed caches support. Set a mutex key before loading; only the request whose set operation succeeds performs the load-DB operation and sets the cache back, i.e. the DB load is processed by only one thread. A sketch follows.
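A minimal sketch of the mutex-key pattern using Redis's SET with NX and EX through the Jedis client; loadFromDb, the lock TTL, and the retry interval are illustrative assumptions.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class MutexKey {
    // Hypothetical DB lookup standing in for the real back end.
    static String loadFromDb(String key) { return "World"; }

    static String get(Jedis jedis, String key) throws InterruptedException {
        while (true) {
            String cached = jedis.get(key);
            if (cached != null) return cached;
            // SET key value NX EX 10: only one caller wins the right to rebuild.
            String ok = jedis.set("mutex:" + key, "1", SetParams.setParams().nx().ex(10));
            if ("OK".equals(ok)) {
                try {
                    String value = loadFromDb(key);
                    jedis.setex(key, 3600, value);
                    return value;
                } finally {
                    jedis.del("mutex:" + key);   // release the mutex even if the load fails
                }
            }
            Thread.sleep(50);                    // lost the race: wait, then re-read the cache
        }
    }
}
```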

Method 2: use the mutex key ahead of time: store a timeout value (timeout1) inside the value itself, with timeout1 smaller than the actual memcache timeout (timeout2). When a read finds that timeout1 has expired, it immediately extends timeout1 and resets it to the cache, then loads the data from the database and sets it into the cache. This intrudes into the business code and increases coding complexity.

Method 3: "never expires". From the Redis side, no expiration time is set at all, which guarantees that a hotspot key can never expire, i.e. it does not expire "physically". But from a functional point of view, if it never expires, doesn't the data become stale? So we store an expiration time in the value corresponding to the key; when a read finds the value is about to expire, the cache is rebuilt by a background asynchronous thread. This is "logical" expiration, sketched below.
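A minimal sketch of logical expiration with the Jedis client: the value is stored as "<expireAtMillis>|<payload>" with no Redis TTL, and an expired read hands the rebuild to one background thread while still returning the stale value; loadFromDb, the encoding, and the TTL are illustrative assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class LogicalExpiry {
    private static final JedisPool POOL = new JedisPool("localhost", 6379);
    private static final ExecutorService REBUILDER = Executors.newSingleThreadExecutor();
    private static final AtomicBoolean REBUILDING = new AtomicBoolean(false);

    static String loadFromDb(String key) { return "World"; }  // hypothetical DB lookup

    static String get(String key) {
        try (Jedis jedis = POOL.getResource()) {
            String raw = jedis.get(key);
            if (raw == null) {                           // first load happens synchronously
                String value = loadFromDb(key);
                jedis.set(key, wrap(value, 60_000));
                return value;
            }
            int sep = raw.indexOf('|');
            long expireAt = Long.parseLong(raw.substring(0, sep));
            String value = raw.substring(sep + 1);
            // Logically expired: let exactly one background rebuild run,
            // while every caller keeps getting the (slightly stale) value.
            if (System.currentTimeMillis() > expireAt
                    && REBUILDING.compareAndSet(false, true)) {
                REBUILDER.submit(() -> {
                    try (Jedis j = POOL.getResource()) {
                        j.set(key, wrap(loadFromDb(key), 60_000));
                    } finally {
                        REBUILDING.set(false);
                    }
                });
            }
            return value;
        }
    }

    static String wrap(String value, long ttlMillis) {
        return (System.currentTimeMillis() + ttlMillis) + "|" + value;
    }
}
```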

(6) Common cache-full and data-loss problems in cache systems

These need to be analyzed against the specific business. Usually we use an LRU eviction strategy to handle overflow, and Redis's RDB and AOF persistence strategies to ensure data safety under certain circumstances.
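As one hedged example, both behaviors can be configured at runtime through CONFIG SET (here via Jedis); the memory limit is illustrative, and in production these settings usually live in redis.conf instead.

```java
import redis.clients.jedis.Jedis;

public class EvictionAndPersistence {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Evict the least-recently-used keys once memory is full.
            jedis.configSet("maxmemory", "256mb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");
            // Turn on AOF so acknowledged writes survive a restart;
            // RDB snapshotting remains governed by the 'save' settings.
            jedis.configSet("appendonly", "yes");
        }
    }
}
```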



Statement:
This article is reproduced from jianshu.com. If there is any infringement, please contact admin@php.cn to request deletion.