What are the high availability solutions for Redis? This article will walk you through them. I hope it will be helpful to you!
Redis is usually not deployed as a single instance, since that would create a single point of failure. So what are the high availability solutions for Redis?
Master-slave replication
Users can use the SLAVEOF command, or the corresponding configuration option, to make one server replicate another. The replicated server is called the master server, and the replicating server is called the slave server. This way, you write keys on the master server and read them from the slave servers.
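For example, a minimal sketch of setting up replication (the address is a placeholder): run SLAVEOF 127.0.0.1 6379 against the slave via redis-cli, or put the equivalent directive in the slave's configuration file:

slaveof 127.0.0.1 6379

Afterwards, writes go to the master at 127.0.0.1:6379 and the slave serves reads.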
The replication process is divided into two steps: synchronization and command propagation.
Synchronization
Synchronization updates the slave server's database state to the master server's current database state.
When a client sends the SLAVEOF command to the slave server, the slave server sends the SYNC command to the master server to start synchronization. The steps are as follows:
The slave server sends a SYNC command to the master server.
On receiving the SYNC command, the master server executes the BGSAVE command, generating an RDB file in the background, and uses a buffer to record all write commands executed from that moment on.
After the BGSAVE command finishes, the master server sends the generated RDB file to the slave server. The slave server receives and loads the RDB file, updating its database state to the master server's state at the time BGSAVE was executed.
The master server sends all write commands in the buffer to the slave server; the slave server executes these write commands, updating its database state to the master server's current state.
Command propagation
After synchronization completes, the database states of the master and slave servers are consistent. But once the master server receives new write commands from clients, master and slave diverge again. Consistency is then maintained through command propagation: the master forwards the write commands it executes to the slaves.
Optimization of PSYNC synchronization
Before version 2.8, every synchronization was a full synchronization, even when a slave server had only been disconnected briefly and really only needed the data written during the disconnection. Version 2.8 therefore introduced PSYNC to replace the SYNC command. PSYNC covers two cases: full resynchronization, which handles the initial sync, and partial resynchronization, which handles reconnection after a disconnect.
Implementation of partial synchronization
Partial synchronization mainly relies on the following three parts:
Replication offset
The master server's replication offset: each time the master server propagates N bytes of data to the slave servers, it adds N to its own replication offset. The slave server's replication offset: each time the slave server receives N bytes of data propagated by the master server, it adds N to its own replication offset. If the master and slave servers are in a consistent state, their offsets are always equal; if the offsets differ, they are in an inconsistent state.
Replication backlog buffer
The replication backlog buffer is a fixed-length FIFO queue maintained by the master server, 1MB by default. Once it reaches its maximum length, the earliest-enqueued elements are evicted to make room for newly added ones.
When the master propagates a write command, it sends it not only to the slave servers but also into the replication backlog buffer. When a slave server reconnects to the master server, it sends its own replication offset to the master via the PSYNC command, and the master uses that offset to decide between partial and full synchronization: if the data after the offset is still in the backlog buffer, partial synchronization is used; otherwise, full synchronization is used.
(The book doesn't say exactly how this is judged. My guess: subtract the slave's replication offset from the master's, and if the difference exceeds 1MB, some of the missing data is no longer in the backlog buffer?)
Server run ID
When a server starts, it generates a 40-character random string as its server run ID.
When a slave server replicates a master server for the first time, the master transmits its run ID to the slave, and the slave saves it. When the slave disconnects and reconnects, it sends the saved run ID to the master it reconnects to. If the saved run ID matches the current master's run ID, partial synchronization is attempted; if they differ, full synchronization is performed.
The overall process of PSYNC
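As a rough sketch of the decision (the names are invented for illustration, not Redis's actual code), assuming the master tracks its run ID, its current offset, and the oldest offset still covered by the backlog:

#include <string.h>

typedef struct {
    char run_id[41];         /* master's 40-character run ID plus NUL */
    long long master_offset; /* master's current replication offset */
    long long backlog_off;   /* offset of the oldest byte still in the backlog */
} master_state;

/* Decide between partial and full resynchronization for a PSYNC request. */
static int can_partial_sync(const master_state *m,
                            const char *slave_run_id,
                            long long slave_offset) {
    if (strcmp(m->run_id, slave_run_id) != 0)
        return 0;  /* slave last replicated a different master: full resync */
    if (slave_offset < m->backlog_off || slave_offset > m->master_offset)
        return 0;  /* the missing bytes have fallen out of the backlog: full resync */
    return 1;      /* partial resync: replay the backlog from slave_offset */
}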
Heartbeat detection
In the command propagation phase, the slave server sends the following command to the master server, by default once per second:
REPLCONF ACK <replication_offset>
where replication_offset is the slave server's current replication offset.
Sending the REPLCONF ACK command serves three functions for the master and slave servers:
Detect the network connection status of the master and slave servers
The master and slave servers can check whether the network connection between them is working by sending and receiving REPLCONF ACK commands: if the master server goes more than one second without receiving a REPLCONF ACK from a slave server, the master knows something is wrong with the connection.
Auxiliary implementation of the min-slaves options
Redis's min-slaves-to-write and min-slaves-max-lag options can prevent the master server from executing write commands in unsafe situations.
min-slaves-to-write 3
min-slaves-max-lag 10
With the configuration above, if the number of slave servers drops below 3, or all 3 slave servers have a lag of 10 seconds or more, the master server refuses to execute write commands.
Detect command loss
If a write command propagated from the master server to a slave server is lost in transit due to a network failure, then when the slave sends its REPLCONF ACK command, the master will notice that the slave's replication offset is smaller than its own. The master can then locate the missing data in the replication backlog buffer based on the slave's offset and re-send the writes to the slave server.
Master-slave replication summary
In essence, master-slave replication keeps an extra copy of the data: even with RDB and AOF persistence in place, the entire machine hosting the master server may fail. By deploying the master and slave servers on two different machines, even if the master's machine goes down you can manually switch to the slave server and keep serving.
Sentinel
Although master-slave replication backs up the data, when the master server goes down you still have to manually promote a slave server to master. Sentinel can switch a slave over to master automatically when the master goes down. The sentinel system monitors all master and slave servers. Suppose server1 goes offline: once server1's offline time exceeds the limit configured by the user, the sentinel system performs a failover on server1.
Initialize sentinel state
struct sentinelState {
    char myid[CONFIG_RUN_ID_SIZE+1];
    // Current epoch, used to implement failover
    uint64_t current_epoch;
    // All master servers monitored by this sentinel;
    // the dictionary keys are the masters' names,
    // the values are pointers to sentinelRedisInstance structures
    dict *masters;
    // Whether TILT mode has been entered
    int tilt;
    // Number of scripts currently executing
    int running_scripts;
    // Time at which TILT mode was entered
    mstime_t tilt_start_time;
    // Last time the timer handler ran
    mstime_t previous_time;
    // A FIFO queue of all user scripts waiting to be executed
    list *scripts_queue;
    char *announce_ip;
    int announce_port;
    unsigned long simfailure_flags;
    int deny_scripts_reconfig;
    char *sentinel_auth_pass;
    char *sentinel_auth_user;
    int resolve_hostnames;
    int announce_hostnames;
} sentinel;
Initialize the masters attribute of the sentinel state
masters records information about all the master servers monitored by the sentinel. The dictionary keys are the names of the monitored servers, and the values are the sentinelRedisInstance structures for those servers. A sentinelRedisInstance is an instance monitored by the sentinel server; it can be a master server, a slave server, or another sentinel.
typedef struct sentinelRedisInstance {
    // Flag value recording the instance's type and current state
    int flags;
    // Instance name
    // A master server's name is set in the configuration file;
    // slave and sentinel names are set automatically by sentinel, in ip:port format
    char *name;
    // Run ID
    char *runid;
    // Configuration epoch, used to implement failover
    uint64_t config_epoch;
    // The instance's address
    sentinelAddr *addr; /* Master host. */
    // Milliseconds without a valid response before the instance is judged subjectively down
    mstime_t down_after_period;
    // Number of supporting votes required to judge this instance objectively down
    unsigned int quorum;
    // During failover, the number of slaves that may synchronize with the new master at once
    int parallel_syncs;
    // Maximum time allowed for refreshing the failover state
    mstime_t failover_timeout;
    // The other sentinels monitoring this master;
    // keys are sentinel names in ip:port format,
    // values are the corresponding sentinel instance structures
    dict *sentinels;
    // ...
} sentinelRedisInstance;
Create network connections to the master server
The final step of initializing sentinel is creating network connections to the monitored master server. Two connections to the master are created:
Command connection: used to send commands to the master server and receive command replies.
Subscription connection: used to subscribe to the master server's __sentinel__:hello channel.
Get master server information
By default, sentinel sends an INFO command to the monitored master server over the command connection every 10 seconds and obtains the master's current information from the reply, such as the master's own run_id and role, and the address and offset of each of its slave servers.
Sentinel can update the sentinelRedisInstance's name dictionary and runid field based on this information.
Get slave server information
Sentinel also creates a command connection and a subscription connection to each slave server.
By default, sentinel sends an INFO command to the slave server over the command connection every 10 seconds and obtains the slave's current information from the reply, such as the slave's run_id, its role, the master's address, and the slave's replication offset.
Based on the INFO reply, sentinel updates the slave server's instance structure.
Send messages to the subscription channels of the master and slave servers
By default, sentinel publishes a message containing the following fields to each monitored master and slave server once every 2 seconds:
s_ip: the sentinel's IP address
s_port: the sentinel's port number
s_runid: the sentinel's run ID
s_epoch: the sentinel's current configuration epoch
m_name: the master server's name
m_ip: the master server's IP address
m_port: the master server's port number
m_epoch: the master server's current configuration epoch
The message is sent to the __sentinel__:hello channel and is also received by every other sentinel monitoring the same server (including the sender itself).
Create command connections to other sentinels
Sentinels create command connections to each other; multiple sentinels monitoring the same master server form an interconnected network.
No subscription connections are created between sentinels.
Detecting subjective offline status
By default, sentinel sends a PING command once per second to every instance it has a command connection with (master servers, slave servers, and other sentinels), and judges from the reply whether the instance is online.
Valid reply: the instance returns one of +PONG, -LOADING, or -MASTERDOWN.
Invalid reply: any reply other than the three above, or no reply within the specified time.
If an instance keeps returning invalid replies to sentinel for down-after-milliseconds milliseconds, sentinel modifies the instance structure for that instance, turning on the SRI_S_DOWN flag in its flags attribute to indicate the instance has entered the subjectively down state. (down-after-milliseconds can be set in the sentinel configuration file.)
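For reference, the threshold is set per master in the sentinel configuration file; a minimal sketch (the master name, address, quorum, and timeout are placeholder values):

sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000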
Detecting objective offline status
After sentinel judges a master server subjectively down, in order to confirm whether the master is really down, it asks the other sentinels monitoring the same master whether they also consider it down. If enough of them agree, the master is judged objectively down.
Ask other sentinels whether they agree the master is down
SENTINEL is-master-down-by-addr <ip> <port> <current_epoch> <runid>
The query is made with the SENTINEL is-master-down-by-addr command. The parameters mean: ip and port are the address of the master being checked; current_epoch is the asking sentinel's current configuration epoch; runid is * when merely checking for objective down status, or the asking sentinel's own run ID when requesting leader election (see below).
Receiving the SENTINEL is-master-down-by-addr command
When another sentinel receives the SENTINEL is-master-down-by-addr command, it checks, based on the master's IP and port, whether that master is down, then returns a Multi Bulk reply containing three parameters: down_state, leader_runid, and leader_epoch.
The asking sentinel counts how many other sentinels agree that the master is down. Once the configured quorum is reached, it turns on the SRI_O_DOWN flag in the master's flags attribute, indicating that the master has entered the objectively down state.
Elect the leader sentinel
When a master server is judged objectively down, the sentinels monitoring it negotiate to elect a leader sentinel, and that sentinel performs the failover.
After confirming that the master has entered the objectively down state, a sentinel sends the SENTINEL is-master-down-by-addr command again, this time to elect the leader sentinel.
Election rules
When a sentinel sends the SENTINEL is-master-down-by-addr command to another sentinel with a runid parameter that is not * but its own run ID, it is asking the target sentinel to set it as the local leader. Election is first come, first served: once a target sentinel has set its local leader, it rejects all subsequent requests. The sentinel chosen as local leader by more than half of the sentinels becomes the leader.
Failover
Failover consists of the following three steps:
From all the slave servers of the downed master, select one and convert it into the new master server.
Make all the other slave servers of the downed master replicate the new master server.
Set the downed master as a slave of the new master: when the old master comes back online, it becomes a slave server of the new master.
Select a new master server
From all the slave servers of the downed master, one is selected; the leader sentinel sends it the SLAVEOF no one command, converting it into a master server.
Rules for selecting a new master server
The leader sentinel saves all the slave servers of the downed master into a list and then filters the list to pick out the new master, as sketched after the rules below:
Remove from the list all slave servers that are down or disconnected.
Remove all slave servers that have not responded to the leader sentinel's INFO command within the last five seconds.
Remove all slave servers whose connection to the downed master had been broken for more than down-after-milliseconds * 10 milliseconds.
Sort the remaining slave servers by slave priority and pick the one with the highest priority.
If several slave servers share the highest priority, sort them by replication offset and pick the slave with the largest offset (the largest replication offset means its data is the most up to date).
If the replication offsets are also equal, sort by run ID and pick the slave server with the smallest run ID.
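A minimal sketch of that ordering in C (the struct and helper are illustrative assumptions, not Redis's actual internals; note that in Redis configuration a lower slave-priority number is the more preferred one):

#include <string.h>

typedef struct {
    int priority;          /* slave_priority: lower number = preferred */
    long long repl_offset; /* replication offset: larger = fresher data */
    char runid[41];        /* 40-character run ID plus terminating NUL */
} candidate;

/* Return the candidate that should win: by priority,
 * then largest replication offset, then smallest run ID. */
static const candidate *better(const candidate *a, const candidate *b) {
    if (a->priority != b->priority)
        return a->priority < b->priority ? a : b;
    if (a->repl_offset != b->repl_offset)
        return a->repl_offset > b->repl_offset ? a : b;
    return strcmp(a->runid, b->runid) <= 0 ? a : b;
}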
After sending the SLAVEOF no one command, the leader sentinel sends an INFO command to the promoted slave once per second (rather than the usual once every 10 seconds). When the role in the reply changes from slave to master, the leader sentinel knows the slave has been promoted to master.
Modify the replication target of the slave server
The SLAVEOF command is used to make the remaining slave servers replicate the new master. When sentinel detects that the old master has come back online, it also sends it a SLAVEOF command, making it a slave of the new master.
Sentinel summary
Sentinel is essentially a monitoring system. When sentinel detects that a master server is down, it elects a leader sentinel through its election mechanism, and the leader picks one of the downed master's slaves and switches it to master, with no manual intervention required.
Although sentinel mode achieves automatic master-slave switching, there is still only one master server handling writes (sentinel mode can also monitor multiple masters, but then clients must implement load balancing themselves). Redis officially provides its own way to implement a cluster.
Cluster
Nodes
Each Redis server instance is a node; multiple connected nodes form a cluster.
CLUSTER MEET <ip> <port>
Sending the CLUSTER MEET command to a node makes that node shake hands with the target node at the given address; once the handshake succeeds, the target node joins the current cluster.
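For example (addresses are placeholders), sending the following to the node listening on port 7000 adds the node at 127.0.0.1:7001 to 7000's cluster:

CLUSTER MEET 127.0.0.1 7001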
Starting a node
When a Redis server starts, it decides whether to run in cluster mode based on whether the cluster-enabled configuration option is set to yes.
Cluster data structures
Each node uses a clusterNode structure to record its own state, and creates a clusterNode structure for every other node in the cluster to record that node's state.
typedef struct clusterNode {
    // Node creation time
    mstime_t ctime;
    // Node name
    char name[CLUSTER_NAMELEN];
    // Node flags
    // Different flag values record the node's role (e.g. master or slave)
    // and its current state (online or offline)
    int flags;
    // The node's current configuration epoch, used to implement failover
    uint64_t configEpoch;
    // The node's IP address
    char ip[NET_IP_STR_LEN];
    // Information about the connection established with the node
    clusterLink *link;
    list *fail_reports;
    // ...
} clusterNode;
clusterLink holds the information needed for the connection to a node:
typedef struct clusterLink {
    // ...
    // Connection creation time
    mstime_t ctime;
    // The node associated with this connection, or NULL if none
    struct clusterNode *node;
    // ...
} clusterLink;
Each node also keeps a clusterState structure, which records the state of the cluster from the current node's point of view: for example, whether the cluster is online or offline and how many nodes it contains.
typedef struct clusterState {
    // Pointer to the current node's own clusterNode
    clusterNode *myself;
    // The cluster's current configuration epoch, used to implement failover
    uint64_t currentEpoch;
    // The cluster's current state: online or offline
    int state;
    // Number of nodes in the cluster handling at least one slot
    int size;
    // Roster of cluster nodes (including the myself node);
    // dictionary keys are node names, values are the corresponding clusterNode structures
    dict *nodes;
} clusterState;
Implementation of the CLUSTER MEET command
CLUSTER MEET <ip> <port>
Node A creates a clusterNode structure for node B and adds it to its own clusterState.nodes dictionary.
Node A then sends a MEET message to node B at the IP address and port given in the CLUSTER MEET command.
If all goes well, node B receives the MEET message, creates a clusterNode structure for node A, and adds it to its own clusterState.nodes dictionary.
Node B then returns a PONG message to node A.
If all goes well, node A receives node B's PONG message; from it, node A knows that node B has successfully received the MEET message.
Node A then returns a PING message to node B.
If all goes well, node B receives node A's PING message; from it, node B knows that node A has successfully received the PONG message, and the handshake is complete.
Slot assignment
The cluster's entire database is divided into 16384 slots; every key belongs to one of these slots, and each node in the cluster handles anywhere from 0 up to 16384 slots. When every slot has a node handling it, the cluster is online; otherwise, it is offline.
CLUSTER ADDSLOTS
CLUSTER ADDSLOTS <slot> [slot ...]
The CLUSTER ADDSLOTS command assigns the specified slots to the current node. For example, CLUSTER ADDSLOTS 0 1 2 3 4 assigns slots 0 through 4 to the current node.
Recording a node's slot assignments
The slots and numslots attributes of the clusterNode structure record which slots the node is responsible for:
typedef struct clusterNode {
    unsigned char slots[CLUSTER_SLOTS/8];
    int numslots;
    // ...
} clusterNode;
slots: a binary array containing 16384 bits. A bit value of 1 means the node handles that slot; 0 means it does not.
numslots: the number of slots the node handles, i.e. the number of 1 bits in slots.
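A minimal sketch of how such a bitmap can be manipulated (the helper names and the bit order within each byte are my assumptions for illustration, not Redis's actual code):

#define CLUSTER_SLOTS 16384

/* Mark slot n as handled by this node. */
static void slot_set(unsigned char *slots, int n) {
    slots[n / 8] |= 1 << (n % 8);
}

/* Return nonzero if slot n is handled by this node. */
static int slot_test(const unsigned char *slots, int n) {
    return slots[n / 8] & (1 << (n % 8));
}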
Propagating a node's slot assignments
Besides recording its own slots in its clusterNode, a node also sends its slots array to the other nodes in the cluster, informing them which slots it currently handles.
typedef struct clusterState {
    clusterNode *slots[CLUSTER_SLOTS];
} clusterState;
This slots array contains 16384 entries; each entry is a pointer to the clusterNode the slot is assigned to, or NULL if the slot is not yet assigned to any node.
Implementation of the CLUSTER ADDSLOTS command
When the command executes, the node first checks that all the specified slots are unassigned, then points each clusterState.slots[i] at itself and sets the corresponding bits in its own slots array.
Executing commands in the cluster
When a client sends a database-related command to a node, the receiving node computes which slot the key the command operates on belongs to, and checks whether that slot is assigned to itself.
If the slot is assigned to itself, the node executes the command directly. If not, the node returns a MOVED error to the client, directing it to the correct node, where the command is sent again.
Computing which slot a key belongs to
CRC16(key) computes the CRC16 checksum of the key, and & 16383 takes the remainder, yielding an integer between 0 and 16383 as the key's slot number.
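A minimal sketch of the computation (the crc16 implementation is assumed to be available; Redis Cluster uses the CRC16-CCITT variant, and it additionally hashes only the {...} hash-tag portion of a key when one is present, which this sketch omits):

#include <stddef.h>
#include <stdint.h>

/* CRC16 checksum as used by Redis Cluster (implementation elided). */
uint16_t crc16(const char *buf, size_t len);

/* Map a key to its hash slot, an integer in [0, 16383]. */
static unsigned int key_hash_slot(const char *key, size_t keylen) {
    return crc16(key, keylen) & 16383;
}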
Checking whether the slot is handled by the current node
Having computed the key's slot number i, the node can check whether it handles that slot itself.
If clusterState.slots[i] equals clusterState.myself, the slot is handled by the node itself and it can execute the command directly.
If they are not equal, the node takes the IP and port recorded in the clusterNode that clusterState.slots[i] points to and returns a MOVED error, redirecting the client to the node responsible for the slot.
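A MOVED error has the form MOVED <slot> <ip>:<port>. For example (values are placeholders), the reply

MOVED 10086 127.0.0.1:7001

tells the client that slot 10086 is served by the node at 127.0.0.1:7001.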
A cluster-mode client does not print the MOVED error; it automatically redirects to the indicated node.
Resharding
Redis cluster resharding can reassign any number of slots already assigned to one node to another node, and the key-value pairs belonging to those slots are moved from the source node to the target node.
Resharding is performed online: the cluster does not need to go down, and both the source node and the target node can keep processing command requests.
Resharding in a Redis cluster is carried out by redis-trib. It proceeds as follows (a consolidated command sketch appears after the steps):
1. redis-trib sends a CLUSTER SETSLOT <slot> IMPORTING <source_id> command to the target node, telling it to prepare to import the key-value pairs of slot slot from the source node.
2. redis-trib sends a CLUSTER SETSLOT <slot> MIGRATING <target_id> command to the source node, telling it to prepare to migrate the key-value pairs of slot slot to the target node.
3. redis-trib sends a CLUSTER GETKEYSINSLOT <slot> <count> command to the source node to obtain the names of at most count keys belonging to the slot.
4. For each key name obtained in step 3, redis-trib sends the source node a MIGRATE <target_ip> <target_port> <key_name> 0 <timeout> command, migrating the selected key-value pair from the source node to the target node.
5. Steps 3 and 4 are repeated until every key-value pair of slot slot stored on the source node has been migrated to the target node.
6. redis-trib sends a CLUSTER SETSLOT <slot> NODE <target_id> command to any node in the cluster, assigning the slot to the target node. This assignment eventually propagates to the entire cluster via messages.
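Sketched end to end with placeholder IDs, addresses, and counts, moving slot 100 might look like:

CLUSTER SETSLOT 100 IMPORTING <source_id>    (sent to the target node)
CLUSTER SETSLOT 100 MIGRATING <target_id>    (sent to the source node)
CLUSTER GETKEYSINSLOT 100 10                 (sent to the source node)
MIGRATE 127.0.0.1 7001 somekey 0 1000        (sent to the source node, once per returned key)
CLUSTER SETSLOT 100 NODE <target_id>         (sent to any node)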
Implementation of the CLUSTER SETSLOT IMPORTING command
typedef struct clusterState {
    // ...
    clusterNode *importing_slots_from[CLUSTER_SLOTS];
} clusterState;
importing_slots_from records the slots the current node is importing from other nodes. If importing_slots_from[i] is not NULL, it points to the clusterNode structure of the source node given by the CLUSTER SETSLOT <slot> IMPORTING <source_id> command.
Implementation of the CLUSTER SETSLOT MIGRATING command
typedef struct clusterState {
    // ...
    clusterNode *migrating_slots_to[CLUSTER_SLOTS];
} clusterState;
migrating_slots_to records the slots the current node is migrating to other nodes. If migrating_slots_to[i] is not NULL, it points to the clusterNode structure of the migration's target node.
The ASK error
During resharding, while the source node is migrating a slot to the target node, some of the slot's key-value pairs may still be on the source node while the rest are already on the target node.
Suppose a client sends the source node a command concerning a database key, and that key belongs to a slot that is being migrated.
The source node first looks up the key in its own database; if it is found, the command is executed directly.
If it is not found, the node checks migrating_slots_to[i] to see whether the key's slot is being migrated; if so, it returns an ASK error, redirecting the client to the target node.
ASKING
After receiving the ASK error, the client first executes the ASKING command and then sends the original command to the target node. The ASKING command turns on the REDIS_ASKING flag of the client that sent it. Normally, when a node receives a command for a key it is not responsible for, it returns a MOVED error (during migration the slot does not yet belong to the target node); but the node also checks importing_slots_from[i], and if it shows the node is importing slot i and the requesting client carries the REDIS_ASKING flag, the node makes a one-time exception and executes the command.
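In practice the redirection looks like this (slot, address, and key are placeholders):

(error) ASK 16198 127.0.0.1:7001    (returned by the source node)
ASKING                              (client sends this to 127.0.0.1:7001)
GET somekey                         (then retries the command there)

The REDIS_ASKING flag is consumed after one command, so the client must send ASKING before each redirected request.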
Cluster failover
Cluster failover behaves much like sentinel mode: a slave node is promoted to master, and when the old master node comes back online it becomes a slave of the new master.
Failure detection
Each node in the cluster periodically sends PING messages to the other nodes to check whether they are online. If no PONG message is received within the specified time, the peer is marked as probably failed: the sender finds the peer's clusterNode structure in its clusterState.nodes dictionary and turns on the REDIS_NODE_PFAIL flag in the flags attribute.
Nodes exchange messages with each other about the status of the cluster's nodes. For example, if master A learns that master B considers master C probably failed, master A finds node C's clusterNode structure in its clusterState.nodes dictionary and appends master B's failure report to the structure's fail_reports list.
Each failure report is represented by a clusterNodeFailReport structure:
typedef struct clusterNodeFailReport {
    struct clusterNode *node;
    // Last time a failure report was received from this node
    mstime_t time;
} clusterNodeFailReport;
If more than half of the slot-handling masters in a cluster report some master X as probably failed, master X is marked as failed. The node that marks X as failed broadcasts a FAIL message about X to the cluster, and every node that receives this FAIL message marks master X as failed.
Failover
When a slave node finds that the master it is replicating has entered the failed state, the slave begins failing over the downed master:
Among all the slave nodes replicating the downed master, one is selected.
The selected slave executes the SLAVEOF no one command and becomes the new master.
The new master revokes all slot assignments of the downed master and assigns those slots to itself.
The new master broadcasts a PONG message to the cluster, letting the other nodes know immediately that this node has changed from slave to master and has taken over the slots the downed node used to handle.
The new master starts accepting command requests for the slots it is responsible for, and the failover is complete.
Electing the new master
The new master is chosen by election:
The cluster's configuration epoch is an auto-incrementing counter with an initial value of 0.
When some node in the cluster starts a failover operation, the cluster's configuration epoch is incremented by 1.
In each configuration epoch, every slot-handling master in the cluster has one chance to vote, and the first slave that asks a master for its vote receives it.
When a slave finds that the master it is replicating has entered the failed state, it broadcasts a CLUSTERMSG_TYPE_FAILOVER_AUTH_REQUEST message to the cluster, asking every master with voting rights that receives the message to vote for it.
If a master has voting rights (it handles at least one slot) and has not yet voted for another slave, it returns a CLUSTERMSG_TYPE_FAILOVER_AUTH_ACK message to the requesting slave, indicating that it supports the slave becoming the new master.
Each slave taking part in the election receives CLUSTERMSG_TYPE_FAILOVER_AUTH_ACK messages and counts how many masters support it.
If the cluster has N masters with voting rights, a slave that collects N / 2 + 1 or more supporting votes is elected as the new master. For example, with N = 5 voting masters, at least 3 votes are required.
Because each voting master can vote only once per configuration epoch, if N masters vote, at most one slave can obtain N / 2 + 1 or more votes, which guarantees there is only one new master.
If no slave collects enough votes within a configuration epoch, the cluster enters a new configuration epoch and holds the election again, until a new master is elected.
The master election process is very similar to the election of the leader sentinel.
Data loss in master-slave replication
Replication between master and slave is asynchronous, so part of the master's data may not yet have been synchronized to the slaves when the master crashes; that unsynchronized data is then lost.
Split brain
Split brain means that the machine a master runs on suddenly drops off the normal network and cannot reach the slave machines, while the master itself is actually still running. The sentinels may then decide the master has crashed, start an election, and switch a slave over to master. At that point there are two masters in the cluster: the so-called split brain.
Although a slave has been switched to master, clients may not have switched over to the new master yet and keep writing data to the old master.
When the old master recovers, it is attached to the new master as a slave; its own data is wiped and re-replicated from the new master, so those writes are lost.
Configuration to reduce data loss
min-slaves-to-write 1
min-slaves-max-lag 10
The configuration above means that if there is not at least 1 slave server that has acknowledged the master within the last 10 seconds, the master stops executing write requests.
Such lag occurs when a slave falls behind in executing replicated commands, because of network problems or blocking on high-complexity commands, delaying synchronization and leaving the master and slave databases inconsistent.
If you've read this far, give it a like before you go :)