
Memcache mutex design pattern

巴扎黑 · Original · 2016-12-20 13:48:34

Scenarios
A mutex is mainly useful when there is heavy concurrent access and the cached data can expire, for example:

The homepage top 10, loaded from the database into memcache and cached for n minutes
The content cache of a Weibo celebrity: once it is missing, a large number of requests miss the cache and hit the database
Data that takes multiple IO operations to produce (for example, several db queries) and therefore needs to be cached
Problem
Under heavy concurrency, when the cache expires, a large number of requests miss the cache at the same instant, hit the database simultaneously, and all write the result back to the cache. This can overload the system; we have experienced similar failures in our online systems.
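
For illustration, here is the naive read-through pattern that triggers the stampede, written in the same pseudo-code style as the examples below (memcache, db and key are assumed placeholders):

Java code

value = memcache.get(key);
if (value == null) {
    // under heavy concurrency many threads reach this point at once:
    // the db is queried N times in parallel and the cache is rewritten N times
    value = db.get(key);
    memcache.set(key, value);
}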

Solution
Method 1
Before loading from the db, try to add a mutex key. If the add succeeds, load from the db, set the cache, and delete the mutex key; if the add fails, sleep briefly and retry reading the original cache key. To prevent deadlock, the mutex key must also be given an expiration time. The pseudo code is as follows.
(Note: the pseudo code below is only meant to illustrate the idea and may contain bugs; corrections are welcome.)

Java code

if (memcache.get(key) == null) {
    // 3 min timeout on the mutex to avoid deadlock if its holder crashes
    if (memcache.add(key_mutex, 3 * 60 * 1000) == true) {
        value = db.get(key);
        memcache.set(key, value);
        memcache.delete(key_mutex);
    } else {
        // another thread is rebuilding the cache: back off, then retry the read
        sleep(50);
        retry();
    }
}
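
For readers who want something closer to runnable code, here is a minimal sketch of Method 1 built on the spymemcached client (net.spy.memcached.MemcachedClient). The class name, key suffix, TTL values and the loadFromDb() helper are illustrative assumptions, not part of the original pseudo code; the retry() call becomes a while loop in which losing threads sleep 50 ms and re-read the cache.

Java code

import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

public class MutexCacheLoader {
    private static final int MUTEX_TTL_SECONDS = 3 * 60;   // expire the mutex in case its holder crashes
    private static final int VALUE_TTL_SECONDS = 10 * 60;  // ordinary cache TTL (assumed value)

    private final MemcachedClient memcache;

    public MutexCacheLoader(MemcachedClient memcache) {
        this.memcache = memcache;
    }

    public Object get(String key) throws Exception {
        String mutexKey = key + ":mutex";
        while (true) {
            Object value = memcache.get(key);
            if (value != null) {
                return value;                               // cache hit
            }
            // only the thread that wins the add() reloads the database
            if (memcache.add(mutexKey, MUTEX_TTL_SECONDS, "1").get()) {
                try {
                    value = loadFromDb(key);                // the expensive load
                    memcache.set(key, VALUE_TTL_SECONDS, value);
                    return value;
                } finally {
                    memcache.delete(mutexKey);              // always release the mutex
                }
            }
            Thread.sleep(50);                               // lost the race: back off and retry
        }
    }

    // placeholder for the real database query
    private Object loadFromDb(String key) {
        return "value-for-" + key;
    }
}

Usage: create one MemcachedClient (for example new MemcachedClient(new InetSocketAddress("localhost", 11211))), wrap it in a MutexCacheLoader, and call get(key) from your request handlers.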



Method 2
Store a timeout value (timeout1) inside the cached value itself, with timeout1 smaller than the actual memcache timeout (timeout2). When a read finds that timeout1 has expired, the thread that wins the mutex immediately extends timeout1 and writes the old value back to the cache, then loads the latest data from the database and sets it in the cache. The pseudo code is as follows.

Java code

v = memcache.get(key);
if (v == null) {
    // cold miss: same mutex pattern as method 1
    if (memcache.add(key_mutex, 3 * 60 * 1000) == true) {
        v = db.get(key);
        v.timeout = now() + KEY_TIMEOUT;
        memcache.set(key, v, KEY_TIMEOUT * 2);
        memcache.delete(key_mutex);
    } else {
        sleep(50);
        retry();
    }
} else {
    if (v.timeout <= now()) {
        if (memcache.add(key_mutex, 3 * 60 * 1000) == true) {
            // extend the logical timeout so other threads keep serving the stale value
            v.timeout += 3 * 60 * 1000;
            memcache.set(key, v, KEY_TIMEOUT * 2);

            // load the latest value from db and reset the logical timeout
            v = db.get(key);
            v.timeout = now() + KEY_TIMEOUT;
            memcache.set(key, v, KEY_TIMEOUT * 2);
            memcache.delete(key_mutex);
        } else {
            sleep(50);
            retry();
        }
    }
}
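
The pseudo code assumes the cached value carries its own logical timeout (timeout1). A small wrapper class along the following lines (the class and method names are illustrative assumptions) is one way to represent that in Java; the entry itself is stored with the physical memcache expiration timeout2 (KEY_TIMEOUT * 2 above), so the stale value is still present when timeout1 elapses.

Java code

import java.io.Serializable;

public class SoftExpiringValue implements Serializable {
    private static final long serialVersionUID = 1L;

    private final Object data;
    private long timeout;   // timeout1: absolute time after which the value is treated as stale

    public SoftExpiringValue(Object data, long ttlMillis) {
        this.data = data;
        this.timeout = System.currentTimeMillis() + ttlMillis;
    }

    public Object getData() {
        return data;
    }

    public boolean isStale() {
        return System.currentTimeMillis() >= timeout;
    }

    // called by the mutex holder so other readers keep serving the stale
    // value while the database reload is in progress
    public void extendTimeout(long extraMillis) {
        this.timeout += extraMillis;
    }
}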



Compared with Method 1
Advantage: when the cache expires, it avoids a large number of requests failing to obtain the mutex and sleeping, because the stale value can still be served while one thread refreshes it.
Disadvantage: the code becomes more complex, so Method 1 is usually sufficient.

Method 2 is also described in detail in the Memcached FAQ under "How to prevent clobbering updates, stampeding requests". Brad also describes using another of his favorite tools, Gearman, to funnel cache regeneration through a single instance (see "Cache miss stampedes"), but solving this with Gearman feels a bit heavy-handed.

