There are two common ways to maintain consistency between Redis and MySQL: 1. Delayed double delete: perform a redis.del(key) operation before and after writing the database, and set a reasonable cache expiration time; 2. Asynchronous cache update: subscribe to the MySQL binlog and replay the changes into Redis.
With the first method, the cache is deleted first and then the database is written; with the second, reads go to Redis, writes go to MySQL, and the binlog keeps Redis up to date.
The cache and database consistency solutions are as follows:
Method 1: Delayed double delete strategy
Perform a redis.del(key) operation before and after writing the database, and set a reasonable cache timeout.
The pseudocode is as follows:
public void write(String key, Object data) throws InterruptedException {
    redis.delKey(key);    // 1. delete the cache first
    db.updateData(data);  // 2. then write the database
    Thread.sleep(500);    // 3. sleep for 500 milliseconds
    redis.delKey(key);    // 4. delete the cache again
}
The specific steps are:
(1) Delete the cache first
(2) Then write the database
(3) Sleep for 500 milliseconds
(4) Delete cache again
So how is this 500 milliseconds determined? How long should the sleep be?
You need to evaluate how long the read business logic of your project takes. The purpose of the sleep is to ensure that any in-flight read request has finished, so that the second delete can remove any dirty data that the read request backfilled into the cache.
Of course, this strategy should also account for the time it takes for the database master-slave replication to catch up. The final sleep time for the write path is the time taken by the read business logic plus a few hundred milliseconds, for example 1 second.
Set the cache expiration time
Theoretically, setting a cache expiration time is a way to guarantee eventual consistency. All write operations take the database as the source of truth; once the cache expires, subsequent read requests will naturally read the new value from the database and backfill the cache.
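As a minimal sketch of that read path (assuming a Jedis client named redis and a hypothetical db.loadData(key) helper, with a 600-second TTL chosen arbitrarily), the backfill with an expiration time could look like this:
public String read(String key) {
    String value = redis.get(key);
    if (value == null) {                  // cache miss
        value = db.loadData(key);         // read the latest value from the database
        redis.setex(key, 600, value);     // backfill the cache with an expiration time
    }
    return value;
}
Once the TTL elapses, the stale entry disappears on its own, which is what bounds the inconsistency window described below.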
Disadvantages of this solution
Combined with the double delete strategy and the cache timeout setting, the worst case is that the data remains inconsistent for the duration of the timeout window, and write requests take longer because of the extra delete and the sleep.
Method 2: Asynchronously update the cache (a synchronization mechanism based on subscribing to the binlog)
Overall technical idea:
MySQL binlog → incremental subscription and consumption → message queue → incremental data applied to Redis
1) Read Redis: hot data is basically all in Redis
2) Write MySQL: inserts, deletes and updates all operate on MySQL
3) Update Redis: the binlog of MySQL's data operations is replayed into Redis
Redis update
(1) Data operations fall into two categories:
One is a full load (write all the data to Redis at once), and the other is incremental (real-time updates).
What we are discussing here is the incremental case, which refers to the data changed by MySQL update, insert and delete statements.
(2) After reading the binlog, parse it, and use a message queue to push the changes so that the Redis cache on each node is updated.
In this way, as soon as a write, update or delete occurs in MySQL, the related binlog messages are pushed out, and Redis is updated according to the records in the binlog.
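As a rough sketch of the consuming side (assuming a Kafka topic named mysql-binlog-events to which the binlog parser publishes one message per changed row, with the cache key as the message key and the serialized row as the value, an empty value meaning the row was deleted; these conventions are assumptions, not part of the original text):
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import redis.clients.jedis.Jedis;

public class BinlogCacheUpdater {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "redis-cache-updater");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Jedis redis = new Jedis("localhost", 6379)) {
            consumer.subscribe(Collections.singletonList("mysql-binlog-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.value() == null || record.value().isEmpty()) {
                        redis.del(record.key());                 // row deleted in MySQL: drop it from the cache
                    } else {
                        redis.set(record.key(), record.value()); // insert/update: overwrite the cached value
                    }
                }
            }
        }
    }
}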
In fact, this mechanism is very similar to MySQL's master-slave replication, because master-slave replication also achieves data consistency through the binlog.
Here you can use canal (an open-source framework from Alibaba) to subscribe to MySQL's binlog: canal imitates the replication request of a MySQL slave, and the binlog it receives is used to update the Redis data, achieving the same effect.
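The exact API depends on the canal version, but a subscription loop with the canal Java client typically looks roughly like this (the server address 127.0.0.1:11111, the destination name example and the schema name mydb are placeholders):
import java.net.InetSocketAddress;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;

public class CanalBinlogSubscriber {
    public static void main(String[] args) throws Exception {
        // Connect to the canal server, which imitates a MySQL slave and pulls the binlog.
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
        connector.connect();
        connector.subscribe("mydb\\..*");                    // subscribe to all tables in the mydb schema
        while (true) {
            Message message = connector.getWithoutAck(100);  // fetch up to 100 binlog entries
            for (CanalEntry.Entry entry : message.getEntries()) {
                if (entry.getEntryType() != CanalEntry.EntryType.ROWDATA) {
                    continue;                                // skip transaction begin/end entries
                }
                CanalEntry.RowChange rowChange =
                        CanalEntry.RowChange.parseFrom(entry.getStoreValue());
                // Push rowChange (INSERT/UPDATE/DELETE row data) to the message queue,
                // or apply it to Redis directly.
            }
            connector.ack(message.getId());                  // acknowledge the processed batch
        }
    }
}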
Of course, you can also use third-party message push tools such as Kafka or RabbitMQ here to push the updates to Redis.