
Example analysis of Redis caching problem

WBOY

2023-05-29 21:50

1. Application of Redis cache

In real business scenarios, Redis is generally used together with another database to reduce the pressure on the back-end database, most commonly the relational database MySQL.

Redis caches frequently queried data from MySQL, such as hotspot data, so that when users access it they do not need to query MySQL at all: the cached data is fetched directly from Redis, reducing the read pressure on the back-end database.

If the data the user queries is not in Redis, the query request is forwarded to the MySQL database. When MySQL returns the data to the client, the data is also written to the Redis cache, so that subsequent reads can be served directly from Redis. The flow chart is as follows:

[Flow chart: client reads from the Redis cache first, falling back to MySQL on a miss and writing the result back to Redis]
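The read path above can be sketched as a small cache-aside lookup. This is a minimal illustration: a dict with expiration timestamps stands in for Redis, and a second dict stands in for MySQL (the key names and data are invented for the example).

```python
import time

cache = {}                  # key -> (value, expires_at); stands in for Redis
db = {"user:1": "Alice"}    # stands in for a MySQL table (illustrative data)

def cache_get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() >= expires_at:   # entry expired, treat as a miss
        del cache[key]
        return None
    return value

def cache_set(key, value, ttl):
    cache[key] = (value, time.time() + ttl)

def query(key):
    # 1. Try the cache first.
    value = cache_get(key)
    if value is not None:
        return value
    # 2. Cache miss: fall back to the database.
    value = db.get(key)
    if value is not None:
        # 3. Write the result back to the cache for later reads.
        cache_set(key, value, ttl=60)
    return value
```

In production the same shape applies with a real Redis client (`GET`, then `SET key value EX ttl` on a miss) instead of the dict.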

When using Redis as a cache database, we inevitably face three common caching problems:

  • Cache Penetration

  • Cache Breakdown

  • Cache Avalanche

2. Cache Penetration

2.1 Introduction

Cache penetration means that when a user queries certain data, the data does not exist in Redis, that is, the cache misses. The query request is then forwarded to the persistence-layer database MySQL, which finds that the data does not exist there either and can only return an empty object, indicating that the query failed. If there are many such requests, or users issue such requests as a deliberate attack, great pressure is put on the MySQL database and it may even crash. This phenomenon is called cache penetration.


2.2 Solution

Cache empty objects

When MySQL returns an empty object, Redis caches that object and sets an expiration time for it. When the user initiates the same request again, the empty object is returned from the cache: the request is blocked at the cache layer, protecting the back-end database. This approach also has a drawback: although requests no longer reach MySQL, the strategy occupies Redis cache space.

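The empty-object strategy can be sketched as follows. Again an in-memory dict stands in for Redis and MySQL; the sentinel value and the 30-second TTL are illustrative choices, not fixed by the technique.

```python
import time

cache = {}                    # key -> (value, expires_at); stands in for Redis
db = {"user:1": "Alice"}      # stands in for MySQL (illustrative data)
NULL_SENTINEL = "__NULL__"    # marker meaning "key known to be absent"

def cache_get(key):
    entry = cache.get(key)
    if entry and time.time() < entry[1]:
        return entry[0]
    cache.pop(key, None)
    return None

def query(key):
    value = cache_get(key)
    if value == NULL_SENTINEL:
        return None                       # blocked at the cache layer
    if value is not None:
        return value
    value = db.get(key)
    if value is None:
        # Cache the empty object with a SHORT expiration time, so
        # repeated requests for the missing key never reach MySQL,
        # without occupying cache space indefinitely.
        cache[key] = (NULL_SENTINEL, time.time() + 30)
        return None
    cache[key] = (value, time.time() + 60)
    return value
```

Keeping the empty object's TTL short limits the space cost mentioned above.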

Bloom filter

First, store the keys of all hotspot data that users may access in a Bloom filter (this is also called cache preheating). When a user makes a request, it first passes through the Bloom filter, which determines whether the requested key exists. If it does not exist, the request is rejected directly; otherwise the query continues: first to the cache, and if the cache misses, then to the database. Compared with the first method, the Bloom filter approach is more efficient and practical. The process diagram is as follows:

[Process diagram: requests pass through the Bloom filter before the cache and database lookup]
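A minimal Bloom filter sketch, assuming k hash functions over an m-bit array. A real deployment would more likely use the RedisBloom module or a tuned library; the sizes and keys below are illustrative.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per key in an m-bit array.
    False from might_contain() means "definitely absent";
    True means only "probably present" (false positives possible)."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, key):
        # Derive k positions from salted SHA-256 digests of the key.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = True

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

# Cache preheating: load every hotspot key into the filter up front.
bf = BloomFilter()
for key in ["user:1", "user:2", "user:3"]:
    bf.add(key)

def guarded_query(key):
    if not bf.might_contain(key):
        return None   # reject before touching the cache or database
    ...               # fall through to the normal cache -> MySQL lookup
```

Because a key that was added is always reported present, legitimate hotspot queries are never rejected; only unknown keys can be filtered out.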

Cache preheating is the process of loading relevant data into the Redis cache in advance, before the system starts serving traffic. This avoids having to load the data only when a user first requests it.

2.3 Comparison of solutions

Both solutions can solve the problem of cache penetration, but their usage scenarios are different:

Cache empty objects: suitable for scenarios where the number of keys for empty data is limited and the probability of repeated key requests is high.


Bloom filter: suitable for scenarios where the keys of empty data vary widely and the probability of repeated key requests is low.

3. Cache breakdown

3.1 Introduction

Cache breakdown means that the data queried by the user does not exist in the cache but does exist in the back-end database. This is generally caused by the expiration of a key in the cache: a hotspot key that has been receiving a large number of concurrent accesses suddenly expires at some moment, so all of those concurrent requests fall through to the back-end database, causing its pressure to spike instantly. This phenomenon is called cache breakdown.

3.2 Solution

Change the expiration time

Set hotspot data to never expire.

Distributed lock

Adopt the distributed lock method to redesign the use of cache. The process is as follows:

  • Locking: when we query data by key, we first query the cache. If it misses, we acquire a distributed lock; the first process to obtain the lock queries the back-end database and writes the result back to the Redis cache.

  • Unlocking: other processes that find the lock already held enter a waiting state. After the lock is released, they access the cached key in turn.

[Flow chart: on a cache miss, one process acquires the lock, rebuilds the cache entry from the database, then releases the lock for waiting processes]
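The lock-then-rebuild flow can be sketched with threads. Here `threading.Lock` stands in for a distributed lock (in real Redis this is typically `SET key value NX PX ttl`), and dicts stand in for Redis and MySQL; the key name and counter are illustrative.

```python
import threading
import time

cache = {}                      # stands in for Redis
db = {"hot:item": "payload"}    # stands in for MySQL (illustrative data)
lock = threading.Lock()         # stand-in for a distributed lock
db_queries = 0                  # counts how often "MySQL" is actually hit

def query(key):
    global db_queries
    value = cache.get(key)
    if value is not None:
        return value
    # Cache miss: only one caller may rebuild the cache entry.
    with lock:
        # Double-check: another thread may have filled the cache
        # while we were waiting for the lock.
        value = cache.get(key)
        if value is not None:
            return value
        db_queries += 1
        value = db[key]         # the single back-end query
        time.sleep(0.01)        # simulate a slow rebuild
        cache[key] = value
    return value

# 20 concurrent requests for the same expired hotspot key:
threads = [threading.Thread(target=query, args=("hot:item",)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The double-check inside the lock is what guarantees the database is queried only once no matter how many requests pile up.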

3.3 Comparison of solutions

Never expires: this solution does not set a real expiration time, so there are none of the hazards caused by hotspot keys expiring, but data can become inconsistent and code complexity increases.

Mutex lock: this solution is relatively simple, but it carries some risk: if building the cache fails or takes a long time, there is a danger of deadlock or thread-pool blocking. However, it effectively reduces the back-end storage load and achieves better consistency.

4. Cache avalanche

4.1 Introduction

Cache avalanche means that a large number of keys in the cache expire at the same time while the volume of data access is very large, causing pressure on the back-end database to surge and possibly even crash it. This phenomenon is called a cache avalanche. It differs from cache breakdown: cache breakdown occurs when one hotspot key suddenly expires under very high concurrency, while a cache avalanche occurs when a large number of keys expire simultaneously, so the two are not on the same order of magnitude at all.


4.2 Solution

Handling expiration

To reduce the breakdown and avalanche problems caused by a large number of keys expiring at the same time, a never-expire strategy can be adopted for hotspot data, as in the cache breakdown solution. In addition, to prevent keys from expiring simultaneously, a random expiration time can be set for each key.
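Randomizing the expiration time is a one-liner; here is a minimal sketch, with the base TTL of one hour and the ten-minute jitter window chosen only for illustration.

```python
import random

def jittered_ttl(base=3600, jitter=600):
    # Spread expirations over [base, base + jitter) seconds so that a
    # batch of keys cached at the same moment does not all expire at
    # the same moment.
    return base + random.randint(0, jitter - 1)

# Usage with a real Redis client would look like:
#   redis_client.set(key, value, ex=jittered_ttl())
```

Even a small jitter window smooths the expiration spike into a gradual trickle of cache misses.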

Redis high availability

A single Redis instance may go down under an avalanche, so several more Redis instances can be added to build a cluster: if one instance fails, the others keep working.


Statement:
This article is reproduced from yisu.com. If there is any infringement, please contact admin@php.cn for deletion.