Redis Cluster and limiting divergences

Redis Cluster is finally on its way to its first stable release, in a short timeframe, as already discussed in the Redis google group [1]. However, while a design that was never proposed for the Redis Cluster implementation was analyzed and discussed at length in the past weeks (unfortunately creating some confusion: many people, including notable personalities of the NoSQL movement, mistook the analyzed proposal for the actual Redis Cluster implementation), no attempt was made to analyze or categorize Redis Cluster itself.

I believe that putting the simple ideas Redis Cluster implements into perspective, in a more formal way, is an interesting exercise for the following reason: Redis Cluster is not a design that tries to achieve "AP" or "CP" in the sense of the CAP theorem, since for its goals CAP Availability and CAP Consistency are too hard to reach without sacrificing other practical qualities.

Once a design does not try to maximize what is theoretically possible, the design space becomes much larger, and the main goal of the implementation becomes to provide some availability and some reasonable form of consistency in the face of other, conflicting design requirements such as asynchronous replication of data.

The goal of this article is to answer the following question: how does Redis Cluster, as an asynchronous system, try to limit divergences between nodes?

Node divergence
===

One of the main problems with asynchronous systems is that a master accepting requests never actually knows, at a given moment, whether it is still the authoritative master or a stale one. For example, imagine a cluster of three nodes A, B, C, with one replica each: A1, B1, C1. When a master is partitioned away together with some client, A1 may be elected on the other side as the new master, but A is not able, for every request processed, to verify with the other nodes whether the request should be accepted or not. At best, node A can get asynchronous acknowledgements from its replicas.

Every time this happens, two parallel time lines for the same set of data are created, one in A and one in A1. In Redis Cluster there is no way to merge data, as explained in [2], so you can think of the merge function between the A and A1 data sets, applied when the partition heals, as simply picking one time line among all the time lines created (two in this case).

Another case that creates different time lines is the replication process itself.

A master A may have three replicas: A1, A2, A3. Because of the very nature of asynchronous replication, each of the slaves may represent a point in the past history of A's time line. Usually the time "delta" between A and its slaves is small, since replication in Redis is very fast and has minimal delay: data is transmitted to the slaves at the same time as the reply is transmitted to the writing client. However, slaves may lag for some reason, so it is possible that, for example, A1 is one second in the past.
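
To make this replication delta concrete, here is a minimal sketch using the redis-py client (the host, port, and connection details are assumptions for a typical local deployment): a master reports its own replication offset and the acknowledged offset of each replica in the INFO replication section, so each slave's lag can be estimated in bytes of replication stream.

```python
# Minimal sketch: estimate how far each replica lags behind the master,
# in bytes of replication stream, using INFO replication.
import redis

master = redis.Redis(host="localhost", port=6379)  # placeholder address

info = master.info("replication")
master_offset = info["master_repl_offset"]

# redis-py parses each "slaveN" line of INFO replication into a dict
# with ip, port, state, offset and lag fields.
for key, value in info.items():
    if key.startswith("slave") and isinstance(value, dict):
        delta = master_offset - value["offset"]
        print(f"{key} ({value['ip']}:{value['port']}) is {delta} bytes behind")
```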

Asynchronously replicated nodes can't really avoid diverging; however, there is no need for the divergence between replicas to be unbounded. Systems that allow replicas to diverge in an uncontrolled way can actually be hard to use in practice, even for use cases that do not require strong consistency, which is exactly the target of Redis Cluster.

Putting bounds on divergence
===

Redis Cluster uses a few heuristics in order to limit the divergence of nodes. The algorithms employed are very simple, consisting merely of the following four rules that nodes follow.

1) When a master is isolated from the majority for node-timeout (the user-configured time after which a non-responding node is considered to be failing by the failure detection algorithm), it stops accepting queries from clients. It is easy to see how this helps in practice: in the majority side of the cluster, no slave is able to get elected and replace the master before node-timeout has elapsed. However, after node-timeout has elapsed, the isolated master knows that a parallel history may have been created on the other side of the cluster, and that this history will win over its own, so it stops accepting data that would otherwise be lost. This means that if a master is partitioned away together with one or more clients, the window for data loss is node-timeout, which is usually on the order of 500-1000 milliseconds. If the partition heals before node-timeout, no data loss happens, as the master rejoins the cluster as a master.
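
As an illustration, here is a simplified sketch of what rule 1 amounts to; this is not the actual Redis source code, and the timeout value and class names are invented for the example:

```python
# Simplified sketch of rule 1: a master that has been isolated from the
# majority for longer than node-timeout stops serving queries, bounding
# the window in which it can accept writes destined to be lost.
import time

NODE_TIMEOUT = 1.0  # seconds; illustrative, in the 500-1000 ms ballpark

class MasterNode:
    def __init__(self):
        self.last_majority_contact = time.monotonic()

    def on_majority_reachable(self):
        # Called whenever gossip shows a majority of masters is reachable.
        self.last_majority_contact = time.monotonic()

    def handle_query(self, command):
        if time.monotonic() - self.last_majority_contact > NODE_TIMEOUT:
            # A replica may already have been elected on the other side;
            # anything accepted here would live on the losing time line.
            raise RuntimeError("CLUSTERDOWN: isolated from the majority")
        return self.execute(command)

    def execute(self, command):
        raise NotImplementedError  # normal command processing goes here
```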

2) A related problem is, when a master is failing, how to pick the "best" history among the available histories for the same data (depending on the number of slaves). An algorithm is used to give an advantage in the election to the slave with the most updated replication offset, that is, the slave that most likely has the most recent data relative to the master that went down. However, if the best slave fails to get elected, the other slaves will try an election as well.
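
One way such an advantage can be implemented is with a rank-based election delay, sketched below; the mechanism follows the idea described above, but the function name and delay constants are invented for illustration:

```python
# Sketch of rule 2: replicas with a more updated replication offset wait
# less before attempting the election, so the best available history is
# the most likely to win. Delay constants are illustrative.
def election_delay(my_offset, all_offsets, base=0.5, rank_step=1.0):
    # Rank 0 = most updated replica, rank 1 = second best, and so on.
    rank = sorted(all_offsets, reverse=True).index(my_offset)
    return base + rank * rank_step  # seconds to wait before requesting votes
```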

3) If you think about rule 2, you'll see that it actually means the divergence between a master and its slaves is unbounded: a slave could in theory be lagging by hours. So there is another heuristic in use: slaves don't try to get elected at all if the last time they received data from the master is too far in the past. This maximum time is currently set to ten times node-timeout, though it will be user-configurable in the stable release of Redis Cluster (a sketch of this check follows the list below). While the lag between master and slaves is usually in the sub-millisecond range, this time limit ensures that the worst case scenario is the following:

- A slave is stopped/unavailable for just a little less than ten times node-timeout.
- Its master fails.
- At the same time the slave becomes available again.
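
The data-age check of rule 3 can be sketched as follows; the ten-times multiplier follows the text above, while the function and parameter names are invented for the example:

```python
# Sketch of rule 3: a replica refuses to even try an election when its
# data is older than a maximum age, here 10 * node-timeout as in the text.
import time

def can_try_election(last_data_from_master, node_timeout=1.0, factor=10):
    # last_data_from_master is a time.monotonic() timestamp of the last
    # replication data received from the failing master.
    data_age = time.monotonic() - last_data_from_master
    return data_age <= factor * node_timeout
```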

Rejoins gone wrong
===

There is another failure mode worth covering in Redis Cluster, as it is in some ways a different instance of the same problems covered so far. What happens if a master rejoins the cluster after it has already been failed over, and there are still clients with a non-updated configuration writing to it?

This may happen in two main ways: rejoining the majority after a partition, and restarting the process. The failure mode is conceptually the same: the master is not able to get a synchronous acknowledgement from other replicas for every write, and the other nodes may take a few milliseconds before they are able to reconfigure a node that just rejoined (the node is usually reconfigured with an UPDATE message as soon as it is detected to have a stale configuration: this usually happens immediately once the rejoining instance pings another node or sends a pong in reply to a ping).

Rejoins are handled with another heuristic:

4) When a node rejoins the majority, or is restarted, it waits a small amount of time (still a few orders of magnitude bigger than the usual network latency) before accepting writes again, in order to maximize the probability of being reconfigured before accepting writes from clients with stale information.
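
A sketch of this fourth heuristic follows; the grace period value is illustrative, not the actual delay used by Redis Cluster:

```python
# Sketch of rule 4: after rejoining the majority (or restarting), a master
# serves no writes for a short grace period, giving UPDATE messages time
# to reconfigure it in case its slot configuration is stale.
import time

WRITE_GRACE_PERIOD = 2.0  # seconds; illustrative, well above network latency

def may_accept_writes(rejoined_at):
    # rejoined_at is a time.monotonic() timestamp taken at rejoin/restart.
    return time.monotonic() - rejoined_at >= WRITE_GRACE_PERIOD
```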

History re-play
===

Some of the Redis data structures and operations are commutative. Obvious examples are INCR and SADD: the order of operations does not matter, and eventually the Set or the counter will have the exact same value as long as all the operations are executed.
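
The commutativity claim is easy to verify directly against a Redis instance; the sketch below assumes a local instance and the redis-py client. Shuffling the execution order of a batch of INCR and SADD operations always produces the same final counter and Set.

```python
# Demonstration of commutativity: any execution order of these INCR/SADD
# operations leaves the counter at 2 and the set at {a, b}.
import random
import redis

r = redis.Redis(host="localhost", port=6379)  # placeholder address
r.delete("counter", "tags")

ops = [
    ("incr", "counter"),
    ("sadd", "tags", "a"),
    ("incr", "counter"),
    ("sadd", "tags", "b"),
]
random.shuffle(ops)  # the order does not matter for commutative operations

for name, *args in ops:
    getattr(r, name)(*args)

print(r.get("counter"))    # always b'2'
print(r.smembers("tags"))  # always {b'a', b'b'}
```

By contrast, operations like SET or LPUSH are order-sensitive, so a replay-based approach is only safe for the commutative subset of commands.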

Because of this observation, and since Redis instances get asynchronous acknowledgements from slaves about how much data they have processed, it is possible for partitioned masters to remember the commands sent by clients that are still not acknowledged by all the replicas.

This trick is able to improve data safety in a way similar to AP systems, but where AP systems merge values, Redis Cluster would instead replay commands from clients.

I proposed this idea in a blog post [3] some time ago; here is a practical example of what would happen if it were implemented in a future version of Redis Cluster:

- A master gets partitioned away with clients.
- Clients write to the master for node-timeout time.
- The master starts returning errors.
- In the majority side, a slave is elected as the new master.
- When the old master rejoins it is reconfigured as a replica of the new master.

So far this is a description of what happens currently. With command replay, all the writes not acknowledged by the replicas, from the time the partition is created to the time the master starts replying with errors, would be accumulated. When the partition heals, as part of turning the old master into a slave, the old master would connect to the new master and re-play the accumulated stream of commands.
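
A conceptual sketch of the replay idea follows; this feature was a proposal at the time of writing, and the buffer structure and names below are invented for illustration:

```python
# Conceptual sketch of command replay: keep commands until they are
# acknowledged by all replicas; when the partition heals and this node
# becomes a slave, replay what is left against the new master.
from collections import deque

class ReplayBuffer:
    def __init__(self):
        self.pending = deque()  # (replication_offset, command) pairs

    def on_write(self, offset, command):
        self.pending.append((offset, command))

    def on_acks(self, min_acked_offset):
        # min_acked_offset: smallest offset acknowledged by every replica;
        # everything at or below it is safe on all time lines.
        while self.pending and self.pending[0][0] <= min_acked_offset:
            self.pending.popleft()

    def replay_to(self, new_master):
        # Safe only for commutative commands such as INCR and SADD.
        for _, command in self.pending:
            new_master.execute(command)
        self.pending.clear()
```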

How is all this tested?
===

Some time ago I wrote about what I use in order to test Redis Cluster [4]. The most valuable tool I have found so far is a simple consistency test that is part of the redis-rb-cluster project [5] (a Ruby Redis Cluster client). Basically, stress testing the system is as simple as keeping the consistency test running while simulating different partitions, restarts, and other failures in the cluster.

This test was so useful, and is so simple to run, that I'm actually starting to think that everybody running a distributed database in production, whether it is Cassandra or Zookeeper or Redis or anything else, should keep a similar test running against the production system as a way to monitor what is happening.

Such tests are lightweight to run; they can be made to just set a few keys per second. Yet they can easily detect issues with the implementation or other unexpected consistency issues. Systems that merge data and are at the same time always available, such as AP systems, especially tend to "mask" bugs: it takes some luck to discover a consistency leak in real data sets. With a simple consistency test running, instead, it is possible to monitor continuously how the system is behaving.

The following, for example, is the output of consistency-test.rb running against my testing environment:

27523967 R (7187 err) | 27523968 W (7186 err) | 12 noack |

So I know that I read and wrote 27 million times during my testing, and that 12 writes that received no acknowledgement were actually materialized inside the database.
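
For readers who prefer not to run the Ruby tool, a minimal sketch in the same spirit as consistency-test.rb could look like the following; the seed node address and key layout are placeholders, and the real tool lives in the redis-rb-cluster repository. It keeps incrementing counters, remembers what was acknowledged, and flags both acknowledged writes that disappear and unacknowledged writes that show up anyway.

```python
# Minimal consistency-test sketch: compare what Redis returns against
# what was acknowledged as written, while the cluster is being stressed.
import redis
from redis.cluster import RedisCluster

rc = RedisCluster(host="localhost", port=7000)  # placeholder seed node
expected = {}
read_err = write_err = lost = noack_materialized = 0

for i in range(1_000_000):
    key = f"key_{i % 1000}"
    try:
        val = int(rc.get(key) or 0)
        if key not in expected:
            expected[key] = val          # first sight: take it as baseline
        elif val < expected[key]:
            lost += 1                    # an acknowledged write disappeared
            expected[key] = val
        elif val > expected[key]:
            noack_materialized += 1      # an unacknowledged write was applied
            expected[key] = val
    except redis.RedisError:
        read_err += 1
    try:
        rc.incr(key)
        expected[key] = expected.get(key, 0) + 1
    except redis.RedisError:
        write_err += 1                   # no ack: write may or may not exist

print(f"{read_err} read errors | {write_err} write errors | "
      f"{lost} lost | {noack_materialized} noack materialized")
```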

Notes:

[1] https://groups.google.com/d/msg/redis-db/2laQRKBKkYg/ssaiQLhasNkJ
[2] http://antirez.com/news/67
[3] http://antirez.com/news/68
[4] http://antirez.com/news/69
[5] https://github.com/antirez/redis-rb-cluster