
Introduction to redis data elimination strategy


This article discusses the eviction strategy Redis applies when a maximum memory limit is set and the cached data set grows beyond a certain proportion of it. This is not the same mechanism as the deletion of expired keys, although the two are very similar.

Redis allows users to set a maximum memory size by configuring the maxmemory value in redis.conf, which enables the eviction mechanism; this is very useful when memory is limited.

Setting a maximum memory size also helps ensure that Redis keeps providing a stable service.
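For example, the limit can be set statically in redis.conf or changed at runtime (the 100mb value below is only illustrative):

# redis.conf
maxmemory 100mb

# or at runtime, without a restart:
# redis-cli CONFIG SET maxmemory 100mb
# redis-cli CONFIG GET maxmemory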


When the in-memory data set grows to the configured size, Redis applies the eviction strategy. Six strategies are provided, selected through the maxmemory-policy setting (a configuration sketch follows the list):

volatile-lru: evict the least recently used key from the set of keys that have an expiration set (server.db[i].expires).

volatile-ttl: evict the key that is closest to expiring from the set of keys that have an expiration set (server.db[i].expires).

volatile-random: evict a random key from the set of keys that have an expiration set (server.db[i].expires).

allkeys-lru: evict the least recently used key from the whole data set (server.db[i].dict).

allkeys-random: evict a random key from the whole data set (server.db[i].dict).

noeviction: do not evict anything; once the limit is reached, commands that would consume more memory simply fail.
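As referenced above, a minimal configuration sketch for selecting one of these policies (the policy names chosen here are only examples):

# redis.conf
maxmemory-policy allkeys-lru

# or at runtime:
# redis-cli CONFIG SET maxmemory-policy volatile-lru
# redis-cli CONFIG GET maxmemory-policy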

Once Redis has decided to evict a key-value pair, it deletes the data and propagates the deletion locally (for AOF persistence) and to the slaves (over the master-slave connection).

LRU data elimination mechanism

The LRU clock server.lruclock is kept in the server state and is updated periodically by the Redis timer routine serverCron(); its value is derived from server.unixtime.

In addition, as can be seen from struct redisObject, every Redis object carries its own lru field, and redisObject.lru is updated each time the object is accessed.

The LRU elimination mechanism works as follows: randomly sample several key-value pairs from the data set and evict the one that has been idle the longest (the least recently used of the sample). Note, therefore, that Redis does not guarantee that the least recently used key of the whole data set is evicted, only the least recently used among a few randomly sampled keys.
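Before turning to the actual Redis structures below, the sampling idea can be shown with a minimal, self-contained C sketch; the entry_t array, SAMPLES constant and pick_lru_victim() function are illustrative stand-ins, not Redis code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NKEYS   8
#define SAMPLES 3          /* plays the role of maxmemory-samples */

typedef struct { const char *key; long lru; } entry_t;  /* lru: last-access clock */

/* Randomly sample SAMPLES entries and return the index of the least
 * recently used one within the sample (largest idle time). */
static int pick_lru_victim(entry_t *db, int n, long now) {
    int best = -1;
    long best_idle = -1;
    for (int k = 0; k < SAMPLES; k++) {
        int i = rand() % n;
        long idle = now - db[i].lru;
        if (idle > best_idle) { best_idle = idle; best = i; }
    }
    return best;
}

int main(void) {
    entry_t db[NKEYS] = {
        {"a",100},{"b",180},{"c",40},{"d",170},
        {"e",90},{"f",160},{"g",10},{"h",150}
    };
    srand((unsigned)time(NULL));
    int victim = pick_lru_victim(db, NKEYS, 200 /* current clock */);
    printf("evict %s (idle %ld)\n", db[victim].key, 200 - db[victim].lru);
    return 0;
}

The victim is only the least recently used of the three sampled entries, not necessarily of the whole array, which mirrors the approximate behaviour described above.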

// redisServer keeps the LRU clock
struct redisServer {
    ...
    unsigned lruclock:22; /* Clock incrementing every minute, for LRU */
    ...
};

// every Redis object keeps its own lru field
#define REDIS_LRU_CLOCK_MAX ((1<<21)-1) /* Max value of obj->lru */
#define REDIS_LRU_CLOCK_RESOLUTION 10 /* LRU clock resolution in seconds */

typedef struct redisObject {
    // exactly 32 bits in total

    // type of the object: string / list / set / hash ...
    unsigned type:4;

    // two unused bits
    unsigned notused:2; /* Not used */

    // encoding: to save space, Redis can store the same value in several ways,
    // e.g. the string "123456789" is stored as the integer 123456789
    unsigned encoding:4;

    unsigned lru:22; /* lru time (relative to server.lruclock) */

    // reference count
    int refcount;

    // pointer to the actual data
    void *ptr;
} robj;

// Redis periodic job. Compare: linux cron
int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
    ......

    /* We have just 22 bits per object for LRU information.
     * So we use an (eventually wrapping) LRU clock with 10 seconds resolution.
     * 2^22 bits with 10 seconds resolution is more or less 1.5 years.
     *
     * Note that even if this will wrap after 1.5 years it's not a problem,
     * everything will still work but just some object will appear younger
     * to Redis. But for this to happen a given object should never be touched
     * for 1.5 years.
     *
     * Note that you can change the resolution altering the
     * REDIS_LRU_CLOCK_RESOLUTION define. */
    updateLRUClock();

    ......
}

// update the server-wide lru clock
void updateLRUClock(void) {
    server.lruclock = (server.unixtime/REDIS_LRU_CLOCK_RESOLUTION) &
                      REDIS_LRU_CLOCK_MAX;
}
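The idle time used by the eviction code further below (estimateObjectIdleTime()) is derived from this same clock. A sketch consistent with the fields above, keeping in mind that the exact source differs slightly between Redis versions:

/* Estimate how long an object has gone without being requested,
 * handling the wrap-around of server.lruclock. */
unsigned long estimateObjectIdleTime(robj *o) {
    if (server.lruclock >= o->lru) {
        return (server.lruclock - o->lru) * REDIS_LRU_CLOCK_RESOLUTION;
    } else {
        return ((REDIS_LRU_CLOCK_MAX - o->lru) + server.lruclock) *
                    REDIS_LRU_CLOCK_RESOLUTION;
    }
}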

TTL data elimination mechanism

The Redis database structure keeps a table of key expiration times, redisDb.expires. Similar to the LRU mechanism, the TTL elimination mechanism randomly samples several key-value pairs from the expires table and evicts the one with the smallest TTL, i.e. the key closest to expiring. Again, Redis does not guarantee that the soonest-expiring key of the whole expires table is removed, only the soonest-expiring key among a few randomly sampled ones.
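Only keys with an expiration set appear in redisDb.expires, so the volatile-* policies (including volatile-ttl) only ever consider keys that were given a TTL, for example (the key names below are illustrative):

redis-cli SET session:42 "payload" EX 300   # key expires in 300 seconds
redis-cli EXPIRE cache:page:1 60            # attach a TTL to an existing key
redis-cli TTL session:42                    # remaining time to live, in seconds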

Summary

Whenever Redis processes a client command, it checks whether memory usage exceeds the limit; if it does, data is evicted.

// command execution
int processCommand(redisClient *c) {
    ......
    // over the memory limit
    /* Handle the maxmemory directive.
     *
     * First we try to free some memory if possible (if there are volatile
     * keys in the dataset). If there are not the only thing we can do
     * is returning an error. */
    if (server.maxmemory) {
        int retval = freeMemoryIfNeeded();
        if ((c->cmd->flags & REDIS_CMD_DENYOOM) && retval == REDIS_ERR) {
            flagTransaction(c);
            addReply(c, shared.oomerr);
            return REDIS_OK;
        }
    }
    ......
}

// free some memory, if needed
int freeMemoryIfNeeded(void) {
    size_t mem_used, mem_tofree, mem_freed;
    int slaves = listLength(server.slaves);

    // slave output buffers and the AOF buffers are not counted as Redis memory
    /* Remove the size of slaves output buffers and AOF buffer from the
     * count of used memory. */
    mem_used = zmalloc_used_memory();

    // size of the slave output buffers
    if (slaves) {
        listIter li;
        listNode *ln;

        listRewind(server.slaves,&li);
        while((ln = listNext(&li))) {
            redisClient *slave = listNodeValue(ln);
            unsigned long obuf_bytes = getClientOutputBufferMemoryUsage(slave);
            if (obuf_bytes > mem_used)
                mem_used = 0;
            else
                mem_used -= obuf_bytes;
        }
    }
    // server.aof_buf && server.aof_rewrite_buf_blocks
    if (server.aof_state != REDIS_AOF_OFF) {
        mem_used -= sdslen(server.aof_buf);
        mem_used -= aofRewriteBufferSize();
    }

    // is memory usage over the configured limit?
    /* Check if we are over the memory limit. */
    if (mem_used <= server.maxmemory) return REDIS_OK;

    // the over-limit policy is configurable
    if (server.maxmemory_policy == REDIS_MAXMEMORY_NO_EVICTION)
        return REDIS_ERR; /* We need to free memory, but policy forbids. */

    /* Compute how much memory we need to free. */
    mem_tofree = mem_used - server.maxmemory;
    mem_freed = 0;
    while (mem_freed < mem_tofree) {
        int j, k, keys_freed = 0;

        // iterate over all databases
        for (j = 0; j < server.dbnum; j++) {
            long bestval = 0; /* just to prevent warning */
            sds bestkey = NULL;
            struct dictEntry *de;
            redisDb *db = server.db+j;
            dict *dict;

            // different policies work on different data sets
            if (server.maxmemory_policy == REDIS_MAXMEMORY_ALLKEYS_LRU ||
                server.maxmemory_policy == REDIS_MAXMEMORY_ALLKEYS_RANDOM)
            {
                dict = server.db[j].dict;
            } else {
                dict = server.db[j].expires;
            }

            // empty data set: move on to the next database
            if (dictSize(dict) == 0) continue;

            // random policies: pick any key
            /* volatile-random and allkeys-random policy */
            if (server.maxmemory_policy == REDIS_MAXMEMORY_ALLKEYS_RANDOM ||
                server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_RANDOM)
            {
                de = dictGetRandomKey(dict);
                bestkey = dictGetKey(de);
            }

            // LRU policies: pick the least recently used key
            /* volatile-lru and allkeys-lru policy */
            else if (server.maxmemory_policy == REDIS_MAXMEMORY_ALLKEYS_LRU ||
                server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_LRU)
            {
                // server.maxmemory_samples is the number of keys to sample;
                // sample that many keys and evict the least recently used one
                for (k = 0; k < server.maxmemory_samples; k++) {
                    sds thiskey;
                    long thisval;
                    robj *o;

                    // pick a random key
                    de = dictGetRandomKey(dict);

                    // get the key
                    thiskey = dictGetKey(de);

                    /* When policy is volatile-lru we need an additional lookup
                     * to locate the real key, as dict is set to db->expires. */
                    if (server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_LRU)
                        de = dictFind(db->dict, thiskey);
                    o = dictGetVal(de);

                    // estimate how long this object has been idle
                    thisval = estimateObjectIdleTime(o);

                    // remember the key with the longest idle time so far
                    /* Higher idle time is better candidate for deletion */
                    if (bestkey == NULL || thisval > bestval) {
                        bestkey = thiskey;
                        bestval = thisval;
                    }
                }
            }

            // TTL policy: pick the key closest to expiring
            /* volatile-ttl */
            else if (server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_TTL) {
                // server.maxmemory_samples is the number of keys to sample;
                // sample that many keys and evict the one expiring soonest
                for (k = 0; k < server.maxmemory_samples; k++) {
                    sds thiskey;
                    long thisval;

                    de = dictGetRandomKey(dict);
                    thiskey = dictGetKey(de);
                    thisval = (long) dictGetVal(de);

                    /* Expire sooner (minor expire unix timestamp) is better
                     * candidate for deletion */
                    if (bestkey == NULL || thisval < bestval) {
                        bestkey = thiskey;
                        bestval = thisval;
                    }
                }
            }

            // delete the selected key
            /* Finally remove the selected key. */
            if (bestkey) {
                long long delta;

                robj *keyobj = createStringObject(bestkey,sdslen(bestkey));

                // publish the data change, mainly for AOF persistence and the slaves
                propagateExpire(db,keyobj);

                // Note: propagateExpire() may allocate memory. It is executed
                // before the measurement below precisely because Redis only
                // counts the memory released by dbDelete(). If the memory
                // allocated by propagateExpire() were counted together with the
                // memory freed by dbDelete(), and the allocation happened to be
                // larger than what was freed, this loop might never terminate.
                // The following ordering would count both at once:
                // propagateExpire(db,keyobj);
                // delta = (long long) zmalloc_used_memory();
                // dbDelete(db,keyobj);
                // delta -= (long long) zmalloc_used_memory();
                // mem_freed += delta;

                /* We compute the amount of memory freed by dbDelete() alone.
                 * It is possible that actually the memory needed to propagate
                 * the DEL in AOF and replication link is greater than the one
                 * we are freeing removing the key, but we can't account for
                 * that otherwise we would never exit the loop.
                 *
                 * AOF and Output buffer memory will be freed eventually so
                 * we only care about memory used by the key space. */
                // only count the memory released by dbDelete()
                delta = (long long) zmalloc_used_memory();
                dbDelete(db,keyobj);
                delta -= (long long) zmalloc_used_memory();
                mem_freed += delta;

                server.stat_evictedkeys++;

                // notify all subscribed clients of the eviction
                notifyKeyspaceEvent(REDIS_NOTIFY_EVICTED, "evicted",
                                    keyobj, db->id);
                decrRefCount(keyobj);
                keys_freed++;

                // flush the slave output buffers to the slaves promptly
                /* When the memory to free starts to be big enough, we may
                 * start spending so much time here that is impossible to
                 * deliver data to the slaves fast enough, so we force the
                 * transmission here inside the loop. */
                if (slaves) flushSlavesOutputBuffers();
            }
        }

        // nothing could be freed and memory usage is still over the limit: fail
        if (!keys_freed) return REDIS_ERR; /* nothing to free... */
    }
    return REDIS_OK;
}

Applicable scenarios

Let’s take a look at the applicable scenarios of several strategies:

allkeys-lru: if the application's cache accesses follow a power-law distribution (that is, some data is relatively hot), or if we do not know the access distribution of the application, allkeys-lru is a good choice.

allkeys-random: If our application has equal access probability to cache keys, we can use this strategy.

volatile-ttl: this strategy lets the application hint to Redis, through the TTLs it sets, which keys are better candidates for eviction.

In addition, volatile-lru and volatile-random are suitable when a single Redis instance serves both as a cache and as persistent storage, although the same effect can also be achieved with two separate Redis instances. It is worth mentioning that setting expiration times on keys actually consumes extra memory, so when Redis is used purely as a cache the allkeys-lru strategy uses memory more efficiently.
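To check which policy is active and whether evictions are actually happening, the standard counters and keyspace notifications can be used (the channel name below assumes database 0):

redis-cli CONFIG GET maxmemory-policy
redis-cli INFO stats | grep evicted_keys          # counter driven by server.stat_evictedkeys above
redis-cli CONFIG SET notify-keyspace-events Ee    # enable keyevent notifications for evicted keys
redis-cli PSUBSCRIBE "__keyevent@0__:evicted"     # events published by notifyKeyspaceEvent(..., "evicted", ...)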

