
What are the troubleshooting and solutions for BigKey in Redis?


Summary

Redis is a high-performance in-memory database, but in practice we may run into the Big Key problem: the value stored under a certain key is too large. The Big Key problem is therefore essentially a Big Value problem, and it can degrade Redis performance or even bring the instance down.

Introduction to Big Key issues

In Redis, every key has a corresponding value. If the value of a key is too large, Redis performance drops or the instance crashes, because Redis must hold the entire big key in memory, which occupies a large amount of memory space and slows down its responses. This is called the Big Key problem. Don't underestimate it: it can turn your Redis into a "turtle" in an instant. Because Redis is single-threaded, operating on a big key is usually time-consuming, which makes it more likely to block Redis, block clients, or trigger a failover, and it shows up as "slow queries".

Generally speaking, the following two situations are considered big keys:

  • The value of a String key exceeds 10 MB.

  • For collection types such as list, set, hash, and zset, the number of elements exceeds 5,000.

These thresholds are not absolute, only a rough guideline; in actual business development you need to judge by the specific application scenario. If operating on a key noticeably slows down request response times, that key can be treated as a big key.
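As a quick way to apply these rough thresholds, a suspicious key can be checked with the type-specific length commands (STRLEN, LLEN, SCARD, HLEN, ZCARD). The following is a minimal sketch using the redis-py client; the connection parameters, thresholds, and key name are assumptions for illustration, not values from this article.

import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

STRING_LIMIT = 10 * 1024 * 1024   # rough 10 MB threshold for String values
ELEMENT_LIMIT = 5000              # rough element-count threshold for collections

# Length command for each collection type.
LEN_COMMANDS = {"list": r.llen, "set": r.scard, "hash": r.hlen, "zset": r.zcard}

def looks_like_big_key(key: str) -> bool:
    key_type = r.type(key)
    if key_type == "string":
        return r.strlen(key) > STRING_LIMIT
    if key_type in LEN_COMMANDS:
        return LEN_COMMANDS[key_type](key) > ELEMENT_LIMIT
    return False  # other types (stream, module, ...) are outside this rough check

print(looks_like_big_key("some:key"))  # hypothetical key name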

In Redis, large keys are usually caused by the following reasons:

  • Objects whose serialized form is too large

  • Containers that store large amounts of data, such as sets, lists, etc.

  • Large data structures, such as bitmap, hyperloglog, etc.

If these large keys are not processed in time, they will gradually consume the memory resources of the Redis server and eventually cause Redis to crash.

Big Key Problem Troubleshooting

When Redis performance drops sharply, it is most likely caused by the existence of a large key. When troubleshooting big key problems, you can consider the following methods:

Use redis-cli --bigkeys

The --bigkeys option built into redis-cli scans all keys in the current Redis instance and produces statistics on the size of key-value pairs across the entire database, for example the number of key-value pairs and the average size for each data type. After the scan completes, it also reports the biggest key found for each data type: for the String type it reports the byte length of the largest value, and for collection types it reports the element count of the largest key.

Because --bigkeys scans the entire database, it inevitably puts load on Redis while it runs and can slow other commands down; it collects the big keys it finds and returns them to the client as a list.

The command format is as follows:

$ redis-cli --bigkeys

The return example is as follows:

# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest string found so far 'a' with 3 bytes
[05.14%] Biggest list   found so far 'b' with 100004 items
[35.77%] Biggest string found so far 'c' with 6 bytes
[73.91%] Biggest hash   found so far 'd' with 3 fields

-------- summary -------

Sampled 506 keys in the keyspace!
Total key length in bytes is 3452 (avg len 6.82)

Biggest string found 'c' has 6 bytes
Biggest   list found 'b' has 100004 items
Biggest   hash found 'd' has 3 fields

504 strings with 1403 bytes (99.60% of keys, avg size 2.78)
1 lists with 100004 items (00.20% of keys, avg size 100004.00)
0 sets with 0 members (00.00% of keys, avg size 0.00)
1 hashs with 3 fields (00.20% of keys, avg size 3.00)
0 zsets with 0 members (00.00% of keys, avg size 0.00)

Note that because --bigkeys needs to scan the entire database, it can place a noticeable burden on the Redis instance. Before running it, make sure the instance has enough spare resources; it is recommended to run it against a replica (slave) node.

Debug Object

Once we have located a big key, we need to analyze it further. We can use the DEBUG OBJECT key command to view detailed information about a key, including the serialized size of its value. This lets you "peek" inside Redis and see exactly which key is too large.

DEBUG OBJECT is a debugging command: when the key exists it returns information about the key, and when the key does not exist it returns an error.

redis 127.0.0.1:6379> DEBUG OBJECT key
Value at:0xb6838d20 refcount:1 encoding:raw serializedlength:9 lru:283790 lru_seconds_idle:150

redis 127.0.0.1:6379> DEBUG OBJECT key
(error) ERR no such key

serializedlength is the number of bytes the key's value occupies after serialization. Note that this is the serialized size, not the value's actual footprint in memory.

MEMORY USAGE

Before Redis 4.0, a key's memory footprint could only be estimated from the serializedlength field of DEBUG OBJECT, and that estimate is inaccurate.

From version 4.0 onwards, we can use the MEMORY USAGE command instead.

MEMORY USAGE is simple to use: just run MEMORY USAGE <key>. If the key exists, it returns an estimate of the actual memory used by the key and its value; if the key does not exist, it returns nil.

127.0.0.1:6379> set k1 value1
OK
127.0.0.1:6379> memory usage k1    // the value of k1 uses 57 bytes of memory
(integer) 57
127.0.0.1:6379> memory usage aaa  // the key aaa does not exist, so nil is returned
(nil)

For types other than String, MEMORY USAGE uses sampling: by default 5 elements are sampled, so the result is an approximation. The number of samples can be specified explicitly with the SAMPLES option.

Example: generate a hash key named hkey with 1 million fields, where each field's value is a random 1 to 1024 bytes long.

127.0.0.1:6379> hlen hkey    // hkey has 1,000,000 fields; each field's value is 1-1024 bytes long
(integer) 1000000
127.0.0.1:6379> MEMORY usage hkey   // with the default SAMPLES of 5, hkey is estimated at 521588753 bytes
(integer) 521588753
127.0.0.1:6379> MEMORY usage hkey SAMPLES  1000 // with SAMPLES set to 1000, hkey is estimated at 617977753 bytes
(integer) 617977753
127.0.0.1:6379> MEMORY usage hkey SAMPLES  10000 // with SAMPLES set to 10000, hkey is estimated at 624950853 bytes
(integer) 624950853

To obtain a more accurate estimate of a key's memory usage, specify a larger number of samples; the more samples you take, however, the more CPU time the command consumes.
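The same check can be scripted across the whole keyspace by combining SCAN with MEMORY USAGE. Below is a minimal sketch with the redis-py client (the connection parameters, SCAN batch size, and top-10 cut-off are assumptions); because it visits every key, run it against a replica or during off-peak hours, just like --bigkeys.

import heapq
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

sizes = []
# SCAN iterates incrementally, so it avoids the full blocking behaviour of KEYS *.
for key in r.scan_iter(count=1000):
    nbytes = r.memory_usage(key)  # default sampling (5 elements) keeps the scan cheap
    if nbytes is not None:        # the key may expire between SCAN and MEMORY USAGE
        sizes.append((nbytes, key))

print("Top 10 keys by estimated memory usage:")
for nbytes, key in heapq.nlargest(10, sizes):
    print(f"{nbytes:>12}  {key}")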

redis-rdb-tools

redis-rdb-tools is a Python tool for parsing RDB files. For memory analysis, we mainly use it to generate a memory report: it converts an RDB snapshot file into a CSV or JSON file, which can then be imported into MySQL to build reports for analysis.

Install from PyPI

pip install rdbtools

Generate a memory report

rdb -c memory dump.rdb > memory.csv

In the generated CSV file there are the following columns:

  • database: the Redis db the key belongs to

  • type: the key's data type

  • key: the key name

  • size_in_bytes: the memory used by the key, in bytes

  • encoding: the storage encoding of the value

  • num_elements: the number of elements in the key's value

  • len_largest_element: the length of the largest element in the key's value

Create a table in MySQL, import the CSV into it, and you can then query and analyze the data directly with SQL statements.

CREATE TABLE `memory` (
     `database` int(128) DEFAULT NULL,
     `type` varchar(128) DEFAULT NULL,
     `KEY` varchar(128),
     `size_in_bytes` bigint(20) DEFAULT NULL,
     `encoding` varchar(128) DEFAULT NULL,
     `num_elements` bigint(20) DEFAULT NULL,
     `len_largest_element` varchar(128) DEFAULT NULL,
     PRIMARY KEY (`KEY`)
 );

Example: query the 3 keys that use the most memory

mysql> SELECT * FROM memory ORDER BY size_in_bytes DESC LIMIT 3;
+----------+------+-----+---------------+-----------+--------------+---------------------+
| database | type | key | size_in_bytes | encoding  | num_elements | len_largest_element |
+----------+------+-----+---------------+-----------+--------------+---------------------+
|        0 | set  | k1  |        624550 | hashtable |        50000 | 10                  |
|        0 | set  | k2  |        420191 | hashtable |        46000 | 10                  |
|        0 | set  | k3  |        325465 | hashtable |        38000 | 10                  |
+----------+------+-----+---------------+-----------+--------------+---------------------+
3 rows in set (0.12 sec)
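If you prefer not to go through MySQL, the same ranking can be read straight from the CSV, for example with pandas. This is a swapped-in alternative to the SQL query above, not part of the original redis-rdb-tools workflow; it assumes the memory.csv generated earlier.

import pandas as pd

# Load the report produced by `rdb -c memory dump.rdb > memory.csv`.
df = pd.read_csv("memory.csv")

# The 3 keys using the most memory, mirroring the SQL query above.
print(df.nlargest(3, "size_in_bytes")[["database", "type", "key", "size_in_bytes"]])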

Approaches to Solving the Big Key Problem

Once a big key problem is found, we need to take measures to resolve it promptly. Several feasible approaches are listed below:

Split the big key

Split the big key into multiple smaller keys. The approach is simple, but it requires changes to the application code. It takes some effort, yet cutting one big cake into small pieces does solve the problem.

Alternatively, try restructuring the big key with Redis data structures, for example spreading it across hashes, lists, or sets, as sketched below.
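As a rough illustration of the splitting idea, one oversized hash can be sharded into many smaller hashes by routing each field to a bucket derived from its name. The snippet below is a minimal sketch with the redis-py client; the key names (big:profile, profile:shard:{n}) and the shard count are hypothetical.

import redis
from zlib import crc32

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

SHARDS = 128  # hypothetical shard count

def shard_key(base: str, field: str) -> str:
    # Route a field to one of SHARDS smaller hashes.
    return f"{base}:shard:{crc32(field.encode()) % SHARDS}"

def sharded_hset(base: str, field: str, value: str) -> None:
    r.hset(shard_key(base, field), field, value)

def sharded_hget(base: str, field: str):
    return r.hget(shard_key(base, field), field)

# Migrate an existing big hash (hypothetical key name) into the shards in batches.
for field, value in r.hscan_iter("big:profile", count=500):
    sharded_hset("profile", field, value)

The application then reads and writes through sharded_hget / sharded_hset, so each individual hash stays well below the element-count threshold.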

Object compression

If a big key is large because the object behind it is huge once serialized, consider using a compression algorithm to shrink it before writing it to Redis; common choices include LZF, Snappy, and gzip. (Redis itself only applies LZF compression internally, for example to RDB files, so value compression is normally done at the application layer.)
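A minimal sketch of application-layer compression, using redis-py and Python's built-in zlib (the key name and compression level are assumptions; Snappy or an LZF binding could be dropped in the same way):

import zlib
import redis

r = redis.Redis(host="127.0.0.1", port=6379)  # keep raw bytes: no decode_responses

def set_compressed(key: str, text: str) -> None:
    # Compress the serialized value before it is written to Redis.
    r.set(key, zlib.compress(text.encode("utf-8"), level=6))

def get_compressed(key: str):
    raw = r.get(key)
    return zlib.decompress(raw).decode("utf-8") if raw is not None else None

# Hypothetical usage: a large, repetitive payload shrinks considerably.
set_compressed("report:2023-05", "row,of,data\n" * 100000)
print(len(get_compressed("report:2023-05") or ""))

The trade-off is extra CPU on the client for every read and write, so compression pays off mainly for large, compressible values that are not read extremely frequently.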

Delete the key directly

If you are running Redis 4.0 or later, you can use the UNLINK command to delete the key asynchronously. On versions before 4.0, consider using SCAN and its per-type variants to delete the key in batches, as sketched below.
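A minimal sketch of both deletion paths with the redis-py client, here for a big hash (the key name is hypothetical); the same batching idea applies to sets and sorted sets with SSCAN/ZSCAN, and to lists with LTRIM.

import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

def delete_big_hash(key: str) -> None:
    # Delete a big hash without a single long-blocking DEL.
    major = int(r.info("server")["redis_version"].split(".")[0])
    if major >= 4:
        r.unlink(key)  # UNLINK frees the memory asynchronously in a background thread
        return
    # Pre-4.0: shrink the hash in small batches with HSCAN + HDEL, then remove it.
    cursor = 0
    while True:
        cursor, fields = r.hscan(key, cursor, count=500)
        if fields:
            r.hdel(key, *fields.keys())
        if cursor == 0:
            break
    r.delete(key)

delete_big_hash("big:events")  # hypothetical key name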

Whichever approach you take, keep the following points in mind:

  • Avoid overly large values. If you need to store a lot of data, split it into multiple smaller values. Like eating a meal, take one bite at a time instead of biting off more than you can chew.

  • Avoid unnecessary data structures. If you only need to store a single string, do not wrap it in a structure such as a Hash or List.

  • Clean up expired keys regularly. A large number of expired keys lingering in Redis degrades its performance; like household garbage, they need to be taken out regularly.

  • Compress large objects where appropriate, as described above.

