1. If there is data in Redis, it needs to be the same as the value in the database.
2. If there is no data in Redis, Redis must be updated synchronously with the latest value in the database.
Writing to the database also writes to the Redis cache synchronously, so the cache stays consistent with the data in the database. In other words, for a read-write cache, keeping the data in the cache and the database consistent requires a synchronous write-through strategy.
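A minimal sketch of that write-through path, reusing the Student/StudentDao types and the Jedis client that appear in the article's later examples; the key format, the getId() accessor and the toJson() helper are assumptions made here for illustration, not the article's own implementation:

import redis.clients.jedis.Jedis;

public class StudentWriteThrough {

    private final StudentDao studentDao;  // the article's DAO (assumed interface)
    private final Jedis jedis;            // Redis client, as in the article's later example

    public StudentWriteThrough(StudentDao studentDao, Jedis jedis) {
        this.studentDao = studentDao;
        this.jedis = jedis;
    }

    // Synchronous write-through: the database write and the cache write happen
    // in the same call path, so the cache never lags behind the database.
    public void updateStudent(Student stu) {
        // 1. Write the latest value to MySQL first
        studentDao.update(stu);
        // 2. Synchronously write the same value into Redis
        //    ("student:" + id is an assumed key format)
        jedis.set("student:" + stu.getId(), toJson(stu));
    }

    // Hypothetical serializer stand-in; use whatever the project actually has.
    private String toJson(Student stu) {
        return String.valueOf(stu);
    }
}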
In some business scenarios, the Redis data is allowed to lag behind the MySQL data for a certain period of time after an update; a logistics system is a typical example.
When an exception occurs, the failed synchronization has to be compensated and replayed, which is where RabbitMQ or Kafka come in, as sketched below.
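A rough sketch of that compensation path with Kafka: when synchronizing a key to Redis fails, the key is published to a retry topic so the operation can be replayed later. The broker address, topic name and serializer settings are assumptions for this sketch:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CacheRetryPublisher {

    private final KafkaProducer<String, String> producer;

    public CacheRetryPublisher() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Called when writing/deleting a key in Redis fails; a consumer picks the
    // message up later and replays the cache operation.
    public void publishFailedKey(String cacheKey) {
        producer.send(new ProducerRecord<>("cache-sync-retry", cacheKey));
    }
}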
If multiple threads query this missing data from the database at the same time, we can put a mutex lock on the first request, so that only it queries the database and then caches the result.
The other threads block at this step because they cannot acquire the lock; they wait for the first thread to query the data and write it into the cache.
The threads that come in later find that the cache has already been populated, so they read directly from the cache.
public String get(String key) {
    // Read from the Redis cache first
    String value = redisTemplate.get(key);
    if (value != null) {
        return value;
    }
    synchronized (RedisTest.class) {
        // Double-check: try the Redis cache again after acquiring the lock
        value = redisTemplate.get(key);
        if (value != null) {
            return value;
        }
        // Query MySQL on a cache miss
        value = studentDao.get(key);
        // Write the result into the Redis cache with an expiration (time is defined elsewhere)
        redisTemplate.setnx(key, value, time);
        return value;
    }
}
That is how it should work according to common sense, right? So what can go wrong here?
What happens if the database is updated successfully, but an exception occurs before Redis is updated?
The database is inconsistent with the cached data in Redis.
There are also problems in multi-threaded situations.
For example:
Thread 1 updates Redis = 200;
Thread 2 updates Redis = 100;
Thread 2 updates MySQL = 100;
Thread 1 updates MySQL = 200;
The result: Redis = 100, MySQL = 200. What a mess!
Thread 1 deletes the Redis cache data, then starts updating the MySQL database;
Before the MySQL update is finished, Thread 2 barges in and tries to read the cached data;
But MySQL has not been updated yet, so Thread 2 reads the old value from MySQL and then writes that old value back into Redis as the cache;
After Thread 1 finishes updating MySQL, it sees that Redis already holds data; since it already deleted the cache earlier, it does not touch it again;
And that is it, the cache and the database are inconsistent again.
Delayed double deletion can solve the above problem, as long as the sleep time is longer than the time Thread 2 needs to read the data and write it back into the cache. In other words, Thread 1's second cache deletion must happen after Thread 2 has written to the cache, which guarantees that the data in the Redis cache ends up being the latest.
/**
 * Delayed double delete
 * @author 哪吒编程
 */
public void deleteRedisData(Student stu) {
    // jedis.del() expects a String key, so build it from the entity
    // ("student:" + id is an assumed key format)
    String key = "student:" + stu.getId();
    // 1. Delete the cached data in Redis
    jedis.del(key);
    // 2. Update the MySQL database
    studentDao.update(stu);
    // 3. Sleep for two seconds
    try {
        TimeUnit.SECONDS.sleep(2);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    // 4. Delete the cached data in Redis again
    jedis.del(key);
}
The biggest problem with delayed double deletion is the sleep. In an age when efficiency is king, it is better not to sleep at all.
People already complain that the request is slow, and now it takes a nap on top of that...
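One common way around the blocking sleep is to hand the second deletion to a scheduler so the request thread returns immediately. This is not the article's own fix (that comes next), just a hedged variant; the key format, the 2-second delay and the pool setup are assumptions:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class AsyncDelayedDoubleDelete {

    // A small shared scheduler so the request thread itself never sleeps.
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newScheduledThreadPool(2);

    private final JedisPool jedisPool;     // pooled because a single Jedis instance is not thread-safe
    private final StudentDao studentDao;   // same DAO type as the article's examples

    public AsyncDelayedDoubleDelete(JedisPool jedisPool, StudentDao studentDao) {
        this.jedisPool = jedisPool;
        this.studentDao = studentDao;
    }

    public void updateStudent(Student stu) {
        String key = "student:" + stu.getId();   // assumed key format
        try (Jedis jedis = jedisPool.getResource()) {
            jedis.del(key);                      // first delete
        }
        studentDao.update(stu);                  // update MySQL
        // The second delete runs 2 seconds later on the scheduler, after any
        // concurrent reader has had time to refill the cache with stale data.
        SCHEDULER.schedule(() -> {
            try (Jedis jedis = jedisPool.getResource()) {
                jedis.del(key);
            }
        }, 2, TimeUnit.SECONDS);
    }
}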
Thread 1 updates the database first, and then deletes the Redis cache;
Thread 2 sends a request before Thread 1 has deleted the Redis cache, and reads the not-yet-deleted, stale Redis cache;
Only after that does Thread 1 delete the Redis cache data;
The problem is still there, and it keeps going round and round like this.
How do we solve this situation?
Introduce message middleware to finish the job; let's walk through it again in detail (a sketch of the retry consumer follows the list).
1. Update the database;
2. The database writes the operation to its binlog;
3. A subscriber program extracts the key and the data from the binlog;
4. It attempts to delete the cache and finds that the deletion has failed;
5. It sends the key and data to the message middleware;
6. A consumer takes the data from the message middleware and retries the cache operation.
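A minimal sketch of the last two steps: a consumer reads the failed keys from the message middleware and retries the deletion. It assumes the failed deletions were published to a Kafka topic named cache-sync-retry with the Redis key as the message value; broker address, group id and topic name are illustrative only:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import redis.clients.jedis.Jedis;

public class CacheDeleteRetryConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "cache-delete-retry");        // assumed consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Jedis jedis = new Jedis("localhost", 6379)) {
            consumer.subscribe(Collections.singletonList("cache-sync-retry"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // The message value carries the Redis key whose deletion failed;
                    // simply try the delete again. Further failures could be
                    // re-queued, which is omitted in this sketch.
                    jedis.del(record.value());
                }
            }
        }
    }
}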
Nezha recommends the fourth approach: update the database first, then delete the cache.
The drawbacks of method ① and method ② are too obvious to be worth considering;
The sleep in method ③ is always a bit of a headache;
Method ④ is the most complete solution, but introducing message middleware adds learning and maintenance costs.
1. When the data on the master server changes, the change is written into the binary event log (binlog) file;
2. The slave server checks the master's binary log at a certain interval to detect whether it has changed; if the master's binary event log has changed, the slave starts an I/O Thread to request the master's binary event log;
3. At the same time, the master server starts a dump Thread for each I/O Thread to send it the binary event log;
4. The slave saves the received binary event log into its own local relay log file;
5. The slave server then starts an SQL Thread to read the binary log from the relay log and replay it locally, so that its data becomes consistent with the master's;
6. Finally, the I/O Thread and SQL Thread go to sleep and wait to be awakened next time.
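The "subscriber program" in method ④ typically taps into this same binlog stream by acting like another replica. A rough sketch with the open-source mysql-binlog-connector-java client; the host, credentials and what is done inside the listener are assumptions here, and key extraction is left as a comment:

import java.io.IOException;
import com.github.shyiko.mysql.binlog.BinaryLogClient;
import com.github.shyiko.mysql.binlog.event.EventData;
import com.github.shyiko.mysql.binlog.event.UpdateRowsEventData;

public class BinlogCacheInvalidator {

    public static void main(String[] args) throws IOException {
        // Connect to MySQL as if this process were a slave (host/credentials assumed).
        BinaryLogClient client = new BinaryLogClient("localhost", 3306, "repl_user", "repl_pass");
        client.registerEventListener(event -> {
            EventData data = event.getData();
            if (data instanceof UpdateRowsEventData) {
                // Derive the cache key from the changed rows here, then either
                // delete it from Redis directly or publish it to the message
                // middleware for the retry consumer shown earlier (omitted).
                System.out.println("Row update received: " + data);
            }
        });
        client.connect();  // blocks and streams binlog events
    }
}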