
How do you deal with the hot key problem in Redis? This article introduces common solutions to the Redis cache hot key problem. I hope you find it helpful!

Let's talk about how to deal with the cache hot key problem in Redis: common solutions

When building C-side (consumer-facing) business, it is almost inevitable to introduce a first-level cache to take pressure off the database and reduce response time. However, every time a middleware is introduced to solve one problem, it inevitably brings new issues that need attention, such as how to achieve cache consistency, which was discussed in the previous article "Database and Cache Consistency in Practice". Other problems can also appear when using Redis as a first-level cache, such as hot keys and large keys. In this article, we discuss the hot key problem and how to solve it reasonably.

Background

What is the hot key problem and how is it caused?

Generally speaking, the Redis cache we use is a multi-node cluster. When a key is read or written, the corresponding slot is calculated from the hash of the key; the slot determines which shard (a group of Redis nodes consisting of one master and several replicas) holds the key, and the K-V is accessed on that shard. In practice, however, for certain businesses or certain periods of time (for example, product flash-sale activities in e-commerce), a large number of requests may access the same key. All of these requests (and the read ratio of such traffic is usually very high) land on the same Redis server, which puts that node under severe load. At this point, adding new Redis instances to the cluster does not help, because according to the hash algorithm, requests for the same key still land on the same machine, which remains the system bottleneck and may even bring down the whole cluster. If the value of the hot key is also large, the network card may become a bottleneck as well. This is known as the "hot key" problem.

Figures 1 and 2 below show key access in a standard Redis Cluster and in a Redis Cluster fronted by a proxy layer, respectively.

[Figure 1: Key access in a standard Redis Cluster]

[Figure 2: Key access in a Redis Cluster fronted by a proxy layer]

As mentioned above, hot keys put an extremely high load on a small number of nodes in the cluster. If this is not handled correctly, those nodes may go down and affect the operation of the entire cache cluster. Therefore, we must detect hot keys and solve hot key problems in time.

1. Hot key detection

Given how a Redis cluster spreads keys and how significant the impact of a hot key can be, we can approach hot key detection by moving from coarse-grained to fine-grained methods.

1.1 QPS monitoring of each slot in the cluster

The most obvious symptom of a hot key is that, even though the overall QPS of the Redis cluster is not that high, traffic is distributed unevenly across the slots in the cluster. So the first thing we can think of is to monitor the traffic of each slot, report it, and compare traffic across slots; this reveals the specific slots affected when a hot key appears. Although this kind of monitoring is the most convenient, its granularity is too coarse: it is only suitable as an early-stage cluster monitoring solution, not for scenarios where hot keys must be detected precisely.

1.2 Statistics at the proxy, the single entry point for all traffic

If we use the Redis cluster proxy mode shown in Figure 2, all requests go through the proxy before reaching the specific slot node, so hot key detection and statistics can be done in the proxy. In the proxy, each key is counted within a time-based sliding window, and keys whose count exceeds the corresponding threshold are reported. To avoid too much redundant counting, rules can also be configured to count only keys with certain prefixes or of certain types. This method requires a proxy in front of Redis, so it places requirements on the Redis architecture.
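A minimal sketch of that counting logic is shown below. It uses a simplified fixed window rather than a true sliding window, and the class name, window length and threshold are illustrative assumptions rather than parts of any particular proxy:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    /**
     * Counts key accesses inside a fixed time window and flags keys whose count
     * exceeds a threshold. A simplified stand-in for a sliding-window counter.
     */
    public class HotKeyDetector {
        private final long windowMillis;   // length of one statistics window
        private final long threshold;      // accesses per window beyond which a key counts as "hot"
        private volatile long windowStart = System.currentTimeMillis();
        private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

        public HotKeyDetector(long windowMillis, long threshold) {
            this.windowMillis = windowMillis;
            this.threshold = threshold;
        }

        /** Call for every key the proxy forwards; returns true if the key looks hot right now. */
        public boolean record(String key) {
            long now = System.currentTimeMillis();
            if (now - windowStart > windowMillis) {
                synchronized (this) {
                    if (now - windowStart > windowMillis) {
                        counters.clear();        // a real implementation would rotate buckets instead
                        windowStart = now;
                    }
                }
            }
            LongAdder counter = counters.computeIfAbsent(key, k -> new LongAdder());
            counter.increment();
            return counter.sum() >= threshold;
        }
    }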

1.3 Redis LFU-based hot key discovery mechanism

Redis 4.0 and above support an LFU-based hot key discovery mechanism on each node: simply add the --hotkeys option when executing redis-cli (note that this requires the node's maxmemory-policy to be set to one of the LFU policies). You can run this command on a node periodically to discover the corresponding hot keys.

As shown below, the output of redis-cli --hotkeys includes statistics for the detected hot keys. The command takes a relatively long time to run, so you can schedule it to execute periodically and collect the statistics.

[Figure: Sample output of redis-cli --hotkeys]

1.4 Detection based on Redis client

Since every Redis command is issued from the client, we can also do the statistics and counting in the Redis client code. Each client counts keys within a time-based sliding window, and once a key exceeds a certain threshold it reports it to a server; the server then pushes the hot key list to every client uniformly and configures the corresponding expiration time.
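A very rough client-side sketch follows; the per-interval counting replaces a real sliding window for brevity, and report(...) is a hypothetical stand-in for whatever RPC or message the client would use to notify the server:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.LongAdder;
    import java.util.function.Function;

    /** Wraps the real Redis read call, counts accesses per key, and reports suspected hot keys. */
    public class CountingRedisClient {
        private final Function<String, String> redisGet;   // the real client call, injected
        private final ConcurrentHashMap<String, LongAdder> stats = new ConcurrentHashMap<>();
        private final long threshold;

        public CountingRedisClient(Function<String, String> redisGet, long threshold, long flushSeconds) {
            this.redisGet = redisGet;
            this.threshold = threshold;
            Executors.newSingleThreadScheduledExecutor()
                     .scheduleAtFixedRate(this::flush, flushSeconds, flushSeconds, TimeUnit.SECONDS);
        }

        public String get(String key) {
            stats.computeIfAbsent(key, k -> new LongAdder()).increment();
            return redisGet.apply(key);
        }

        private void flush() {
            Map<String, LongAdder> snapshot = new ConcurrentHashMap<>(stats);
            stats.clear();                       // start a fresh interval (small races are acceptable for a sketch)
            snapshot.forEach((key, count) -> {
                if (count.sum() >= threshold) {
                    report(key, count.sum());    // hypothetical: push the candidate to the central service
                }
            });
        }

        private void report(String key, long count) {
            // hypothetical reporting hook; in practice an RPC or MQ message to the detection server
            System.out.println("hot key candidate: " + key + " count=" + count);
        }
    }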

This method looks elegant, but in practice it is not so suitable for some application scenarios, because modifying the client adds memory overhead to the running process. More directly, for languages with automatic memory management such as Java and Go, objects are created more frequently, which triggers GC and increases interface response time, and this effect is hard to predict.

In the end, you can make the appropriate choice based on your own company's infrastructure.

2. Hot key solution

Through the above methods we can detect the corresponding hot keys or hot slots; the next step is to solve the hot key problem. There are several ideas for solving it, and we will go through them one by one.

2.1 Rate-limit a specific key or slot

The simplest and crudest way is to rate-limit the specific slot or hot key. This solution obviously hurts the business, so it is recommended to use such targeted rate limiting only when there is an online incident and the bleeding needs to be stopped.
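A minimal sketch of such per-key rate limiting, using Guava's RateLimiter (the class name and the permits-per-second value are illustrative):

    import com.google.common.util.concurrent.RateLimiter;
    import java.util.concurrent.ConcurrentHashMap;

    /** Stop-loss rate limiting applied only to keys that have been flagged as hot. */
    public class HotKeyRateLimiter {
        // one limiter per flagged key, each allowing e.g. 1000 requests per second
        private final ConcurrentHashMap<String, RateLimiter> limiters = new ConcurrentHashMap<>();
        private final double permitsPerSecond;

        public HotKeyRateLimiter(double permitsPerSecond) {
            this.permitsPerSecond = permitsPerSecond;
        }

        /** Returns false when the request for this key should be rejected or degraded. */
        public boolean tryPass(String hotKey) {
            RateLimiter limiter = limiters.computeIfAbsent(hotKey, k -> RateLimiter.create(permitsPerSecond));
            return limiter.tryAcquire();
        }
    }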

2.2 Use the second-level (local) cache

A local cache is also the most commonly used solution. Since our first-level cache cannot withstand such pressure, we add a second-level cache. Since each request is issued by the service, it is natural to add this second-level cache on the service side: every time the server fetches a hot key, it keeps a copy in the local cache until that copy expires, and only then requests Redis again, which reduces the pressure on the Redis cluster. Taking Java as an example, Guava's LoadingCache is a ready-made tool, as in the following example (the load callback below falls back to a hypothetical loadFromRedis helper on a cache miss):

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    // Local cache initialization and construction
    private static LoadingCache<String, List<Object>> configCache
            = CacheBuilder.newBuilder()
            .concurrencyLevel(8)  // concurrency level for reads/writes; the number of CPU cores is a reasonable value
            .expireAfterWrite(10, TimeUnit.SECONDS)  // how long after a write an entry expires
            .initialCapacity(10)  // initial capacity of the cache
            .maximumSize(10)      // maximum number of entries the cache may hold
            .recordStats()
            // build() can take a CacheLoader, which loads the value automatically when the cache misses
            .build(new CacheLoader<String, List<Object>>() {
                @Override
                public List<Object> load(String hotKey) throws Exception {
                    // on a miss, fall back to Redis (or the database); loadFromRedis is a hypothetical helper
                    return loadFromRedis(hotKey);
                }
            });

    // Reading from the local cache (getUnchecked avoids the checked ExecutionException thrown by get)
    Object result = configCache.getUnchecked(key);

The biggest impact of the local cache is data inconsistency: whatever expiration time we set for the local cache is the longest possible window of inconsistency with the online data. This cache time needs to be weighed against your cluster's pressure and the maximum inconsistency window the business can accept.

2.3 Key splitting

How can we avoid hot key problems while keeping data as consistent as possible? Splitting the key is also a good solution.

When writing to the cache, we split the cache key of the corresponding business into multiple different keys. As shown in the figure below, on the cache-update side we first split the key into N copies. For example, if a key is named "good_100", we can split it into four copies: "good_100_copy1", "good_100_copy2", "good_100_copy3", "good_100_copy4". All N keys must be modified on every update or insert; this step is the key splitting.

On the service (read) side, we need to spread the access traffic evenly by deciding how to append a suffix to the hot key we are about to read. There are several ways: hash the machine's IP or MAC address and take the remainder modulo the number of split keys, which decides which suffix is appended and therefore which shard the request hits; or generate a random number once when the service starts and take its remainder modulo the number of split keys. A sketch follows after the figure below.

[Figure: Splitting one hot key into N copies so that reads spread across different shards]
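A minimal sketch of the split-key write and read paths (the key naming follows the good_100 example above; the actual Redis set/get calls are injected and left abstract, and choosing the suffix once at service start-up is just one of the strategies mentioned):

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.function.BiConsumer;
    import java.util.function.Function;

    /** Spreads one logical hot key across N copies so that reads land on different slots. */
    public class SplitKeyCache {
        private final int copies;                           // e.g. 4 -> good_100_copy1 .. good_100_copy4
        private final BiConsumer<String, String> redisSet;  // the real write call, injected
        private final Function<String, String> redisGet;    // the real read call, injected
        // a suffix chosen once per process at start-up (one of the strategies described above)
        private final int localSuffix = ThreadLocalRandom.current().nextInt(Integer.MAX_VALUE);

        public SplitKeyCache(int copies, BiConsumer<String, String> redisSet, Function<String, String> redisGet) {
            this.copies = copies;
            this.redisSet = redisSet;
            this.redisGet = redisGet;
        }

        /** Writes must update every copy so that all readers see the new value. */
        public void put(String key, String value) {
            for (int i = 1; i <= copies; i++) {
                redisSet.accept(key + "_copy" + i, value);
            }
        }

        /** Reads pick one copy, here based on the suffix fixed at start-up. */
        public String get(String key) {
            int index = (localSuffix % copies) + 1;
            return redisGet.apply(key + "_copy" + index);
        }
    }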

2.4 Another idea: borrow from the configuration center's local cache

For those familiar with microservice configuration centers, our idea can be changed to follow the consistency model of a configuration center. Take Nacos as an example: how does it achieve distributed configuration consistency with fast response? We can treat the cache by analogy with configuration and do the same thing.

Long polling plus localized configuration. First, all configuration is loaded when the service starts; then a long-polling loop regularly checks whether the configuration the service watches has changed. If there is a change, the long-polling request returns immediately and the local configuration is updated; if there is no change, all business code uses the configuration cached in local memory. This ensures both the timeliness and the consistency of the distributed cache configuration.
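A bare-bones sketch of that long-polling refresh loop is shown below; loadAll() and checkForChange(...) are hypothetical placeholders for the start-up snapshot pull and the blocking long-poll call, and are not part of any real config-center client:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    /** Keeps a local in-memory copy consistent the way a config center does: load once, then long-poll. */
    public class LongPollingLocalCache {
        private final Map<String, String> local = new ConcurrentHashMap<>();

        public void start() {
            loadAll();                                   // 1. load the full snapshot once at start-up
            Thread poller = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    // 2. the long-poll call blocks on the server until something changes or it times out
                    Map<String, String> changed = checkForChange(local.keySet());
                    local.putAll(changed);               // 3. apply changes immediately, then poll again
                }
            }, "hot-key-long-poller");
            poller.setDaemon(true);
            poller.start();
        }

        /** Business code always reads from local memory. */
        public String get(String key) {
            return local.get(key);
        }

        private void loadAll() {
            // hypothetical: pull the full key/value snapshot from the server into `local`
        }

        private Map<String, String> checkForChange(Set<String> keys) {
            // hypothetical: long-poll the server; return only the entries that changed (possibly empty)
            return Collections.emptyMap();
        }
    }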

2.5 Other plans that can be made in advance

Each of the above solutions solves the hot key problem relatively independently, so when we actually face a business demand there is usually plenty of time to consider the overall design. For hot key problems caused by extreme scenarios such as flash sales, if the budget is sufficient we can simply isolate that business's service and Redis cache cluster from everything else so that normal business is not affected, and at the same time temporarily apply stronger disaster recovery and rate limiting measures.

Some integrated solutions

There are already several relatively complete application-level hot key solutions on the market. Among them, JD.com has open-sourced its hotkey tool: the principle is to observe on the client side and report the corresponding hot keys; once the server detects a hot key, it pushes it to the relevant instances for local caching, and that local cache is updated synchronously when the remote key is updated. It is currently a relatively mature solution for automatic hot key detection plus distributed consistent caching: JD Retail hotkey.

[Figure: The JD Retail hotkey solution]

Summary

The above are some of the approaches to dealing with hot keys that the author roughly understands or has practiced, covering the two key problems of discovering hot keys and solving them. Each solution has its advantages and disadvantages, such as business data inconsistency or implementation difficulty. You can make the corresponding adjustments based on the characteristics of your own business and your company's current infrastructure.


Statement
This article is reproduced from 掘金社区 (Juejin). If there is any infringement, please contact admin@php.cn to have it deleted.