Partitioning is the process of splitting your data across multiple Redis instances, so that each instance contains only a subset of all your keys. The first part of this article introduces the concept of sharding, and the second part covers the options available for sharding Redis.
What sharding can do
Redis’ sharding has two main goals:
1. It allows for much larger databases, using the combined memory of many computers. Without sharding, you are limited to the amount of memory a single machine can support.
2. It allows scaling computational power to multiple cores and multiple servers, and scaling network bandwidth to multiple servers or multiple network adapters.
Sharding Basics
There are many different criteria for sharding. Suppose we have 4 Redis instances R0, R1, R2, R3, and many keys representing users, like user:1, user:2, and so on. There are different ways to decide in which instance a specific key should be stored. In other words, there are many different ways to map a key to a specific Redis server.
One of the simplest ways to perform sharding is range partitioning, which maps ranges of objects to specific Redis instances. For example, I could decide that users with IDs from 0 to 10000 go into instance R0, users with IDs from 10001 to 20000 go into instance R1, and so on.
This approach works and is actually used in practice. However, it has the disadvantage of requiring a table that maps ranges to instances. This table needs to be managed, and every type of object needs its own table, so range sharding is often undesirable in Redis because it is much less efficient than the alternatives.
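To make that drawback concrete, here is a minimal, hypothetical sketch of such a range-to-instance table in Python (the ranges and instance names are made up). Every type of object would need its own table like this, maintained by hand:

```python
# Hypothetical range-to-instance table for user IDs; every object type
# needs its own table like this, and it has to be kept up to date by hand.
USER_RANGES = [
    (0, 10000, "R0"),
    (10001, 20000, "R1"),
    (20001, 30000, "R2"),
    (30001, 40000, "R3"),
]

def instance_for_user(user_id):
    """Return the name of the Redis instance that should hold user:<user_id>."""
    for low, high, instance in USER_RANGES:
        if low <= user_id <= high:
            return instance
    raise ValueError("no range configured for user id %d" % user_id)

print(instance_for_user(4242))   # -> R0
print(instance_for_user(15000))  # -> R1
```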
An alternative to range sharding is hash partitioning. This scheme works with any key; it does not require the key to be in the form object_name:ID, and it is as simple as this:
1. Use a hash function (for example, crc32 hash function) to convert the key name to a number. For example, if the key is foobar, crc32(foobar) will output something like 93024922.
2. Take this number modulo 4 to turn it into a number between 0 and 3, so that it can be mapped to one of my 4 Redis instances. 93024922 modulo 4 equals 2, so I know my key foobar should be stored in the R2 instance. Note: the modulo operation returns the remainder of a division, and is implemented as the % operator in many programming languages.
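The two steps above can be sketched in a few lines of Python using the crc32 function from the standard library (the instance names are placeholders rather than real connections, and the exact hash value depends on the CRC implementation used):

```python
import zlib

INSTANCES = ["R0", "R1", "R2", "R3"]  # placeholders for 4 Redis instances

def instance_for_key(key):
    """Hash partitioning: hash the key name, then take the result modulo
    the number of instances to pick one of them."""
    h = zlib.crc32(key.encode("utf-8"))   # step 1: key name -> number
    return INSTANCES[h % len(INSTANCES)]  # step 2: modulo 4 -> R0..R3

print(instance_for_key("foobar"))  # the same key always maps to the same instance
```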
As you can see from these two examples, there are many other ways to shard. An advanced form of hash sharding is called consistent hashing and is implemented by some Redis clients and proxies.
Different implementations of sharding
Sharding can be handled by different parts of the software stack.
1. Client-side partitioning means that the client directly selects the correct node to write or read a given key. Many Redis clients implement client-side sharding; a minimal sketch follows this list.
2. Proxy-assisted partitioning means that our clients send requests to a proxy that understands the Redis protocol, instead of sending requests directly to a Redis instance. The proxy makes sure each request is forwarded to the correct Redis instance according to the configured sharding scheme, and returns the response to the client. Twemproxy, a proxy for Redis and Memcached, implements proxy-assisted sharding.
3. Query routing means that you can send your query to a random instance, and this instance will ensure that your query is forwarded to the correct node. Redis Cluster implements a hybrid form of query routing with the help of clients (requests are not forwarded directly from one Redis instance to another, but the client receives a redirect to the correct node).
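As an illustration of the first option, here is a minimal sketch of client-side partitioning, assuming the redis-py client and four hypothetical instances on localhost ports 6379 to 6382. It is a toy, not a production-ready sharded client:

```python
import zlib
import redis  # assumes the redis-py client is installed

# Hypothetical shard layout: four instances on one host, different ports.
SHARDS = [redis.Redis(host="localhost", port=p) for p in (6379, 6380, 6381, 6382)]

def node_for(key):
    """Client-side partitioning: the client picks the shard itself, no proxy."""
    return SHARDS[zlib.crc32(key.encode("utf-8")) % len(SHARDS)]

def sharded_set(key, value):
    return node_for(key).set(key, value)

def sharded_get(key):
    return node_for(key).get(key)

sharded_set("user:1000", "Alice")
print(sharded_get("user:1000"))  # served by whichever shard user:1000 hashes to
```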
Disadvantages of sharding
Some features of Redis do not play well with sharding:
1. Operations involving multiple keys are generally not supported. For example, you cannot perform an intersection between sets stored on two different Redis instances (there is actually an indirect way to do it, shown in the sketch after this list).
2. Transactions involving multiple keys cannot be used.
3. The granularity of sharding is the key, so it is not possible to shard a dataset using a single huge key, such as a very large sorted set.
4. When sharding is used, handling the data becomes more complex. For example, you have to deal with multiple RDB/AOF files, and to back up your data you need to aggregate the persistence files from multiple instances and hosts.
5. Adding and removing capacity is also complicated. For example, Redis Cluster can dynamically add and remove nodes at runtime to support transparent rebalancing of data, but other approaches, such as client-side sharding and proxies, do not support this feature. However, a technique called pre-sharding can help here.
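For point 1, the indirect workaround is simply to fetch the data from each instance and combine it in the client. A hypothetical sketch with redis-py (the keys, ports, and which instance holds which set are all made up):

```python
import redis  # assumes the redis-py client

r0 = redis.Redis(host="localhost", port=6379)  # hypothetical shard holding "tags:python"
r1 = redis.Redis(host="localhost", port=6380)  # hypothetical shard holding "tags:redis"

# SINTER tags:python tags:redis cannot be sent to either instance, because the
# two keys live on different shards; instead, fetch both sets and intersect
# them in the client.
common = r0.smembers("tags:python") & r1.smembers("tags:redis")
print(common)
```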
Data storage or caching
Although Redis sharding is conceptually the same whether Redis is used as a data store or as a cache, there is an important limitation when it is used as a data store. When Redis is used as a data store, a given key must always map to the same Redis instance. When Redis is used as a cache, it is not a big problem if a given node becomes unavailable and another node is used instead; changing the key-to-instance mapping as we wish improves the availability of the system (that is, the system's ability to answer our queries).
Consistent hashing implementations are often able to switch to other nodes if the preferred node for a given key is unavailable. Similarly, if you add a new node, some data will start to be stored in this new node.
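To illustrate the idea, here is a toy consistent hash ring in Python. It is only a sketch of the technique, not the implementation used by any particular client; the node names and virtual-node count are arbitrary:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent hashing: each node owns many points ("virtual nodes")
    on a ring, and a key belongs to the first node point found clockwise
    from the key's own hash."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes
        self._points = []   # sorted hash points on the ring
        self._owner = {}    # hash point -> node name
        for node in nodes:
            self.add_node(node)

    def _hash(self, value):
        return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

    def add_node(self, node):
        # a new node only takes over the keys closest to its new points
        for i in range(self.vnodes):
            point = self._hash("%s#%d" % (node, i))
            self._owner[point] = node
            bisect.insort(self._points, point)

    def remove_node(self, node):
        self._points = [p for p in self._points if self._owner[p] != node]
        self._owner = {p: n for p, n in self._owner.items() if n != node}

    def node_for(self, key):
        idx = bisect.bisect(self._points, self._hash(key))
        if idx == len(self._points):
            idx = 0  # wrap around the ring
        return self._owner[self._points[idx]]

ring = ConsistentHashRing(["R0", "R1", "R2", "R3"])
owner = ring.node_for("user:1000")
ring.remove_node(owner)            # the preferred node becomes unavailable...
print(ring.node_for("user:1000"))  # ...so the key falls through to another node
```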
The main concepts here are as follows:
1. If Redis is used as a cache, it is easy to use consistent hashing to achieve scaling up and down.
2. If Redis is used as storage, a fixed key-to-node mapping is used, so the number of nodes must be fixed and cannot change. Otherwise, when adding or removing nodes, you need a system that can rebalance keys between nodes; currently only Redis Cluster can do this, but Redis Cluster is still in beta and is not yet recommended for production use.
Pre-sharding
We have already seen that sharding has a problem: unless we use Redis as a cache, adding and removing nodes is tricky, and it is much simpler to use a fixed key-to-instance mapping.
However, data storage needs may be changing all the time. Today I can live with 10 Redis nodes (instances), but tomorrow I may need 50 nodes.
Because Redis has a relatively small memory footprint and is lightweight (an idle instance only uses 1MB of memory), a simple solution is to start many instances from the beginning. Even if you start with just one server, you can decide on day one to live in a distributed world and use sharding to run multiple Redis instances on a single server.
You can choose a large number of instances from the beginning. For example, 32 or 64 instances will satisfy most users and provide enough room for future growth.
This way, as your data storage needs grow and you need more Redis servers, all you have to do is move instances from one server to another. When you add a second server, you move half of the Redis instances from the first server to the second, and so on.
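A hypothetical sketch of what this looks like from the client's point of view, assuming the redis-py client: the number of shards is fixed (say 64) forever, and only the address table changes when instances are moved to new servers.

```python
import zlib
import redis  # assumes the redis-py client

# Pre-sharding: 64 shards, fixed forever. On day one all 64 instances might
# run on a single server; after a move, only this (hypothetical) address
# table changes -- the key-to-shard mapping never does.
SHARD_ADDRESSES = ([("10.0.0.1", 6379 + i) for i in range(32)] +
                   [("10.0.0.2", 6379 + i) for i in range(32)])

def shard_for(key):
    index = zlib.crc32(key.encode("utf-8")) % len(SHARD_ADDRESSES)  # always 64 shards
    host, port = SHARD_ADDRESSES[index]
    return redis.Redis(host=host, port=port)

shard_for("user:1000").set("user:1000", "Alice")
```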
Using Redis replication, you can move data with little or no downtime (a minimal sketch of these steps follows the list):
1. Start an empty instance on your new server.
2. Move your data by configuring the new instance as a slave of the source instance.
3. Stop your client.
4. Update the configuration of the moved instance with the new server's IP address.
5. Send the SLAVEOF NO ONE command to the slave node on the new server.
6. Start your client with the new updated configuration.
7. Finally, close the instances that are no longer used on the old server.
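As a rough sketch, and assuming the redis-py client (the hosts and ports are hypothetical), the replication-related steps above boil down to:

```python
import redis  # assumes the redis-py client

new = redis.Redis(host="10.0.0.2", port=6379)  # step 1: empty instance on the new server

# Step 2: replicate the source instance until the new one has a full copy of the data.
new.slaveof("10.0.0.1", 6379)

# ... steps 3-4: stop the clients and point their configuration at 10.0.0.2:6379 ...

# Step 5: promote the new instance to master (sends SLAVEOF NO ONE).
new.slaveof()

# ... step 6: restart the clients; step 7: shut down the old instance on 10.0.0.1.
```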
Redis sharding implementation
Redis Cluster is the preferred way to get automatic sharding and high availability. It is not yet fully ready for production use, but it has entered the beta stage.
Once Redis Cluster and clients that support it become available, Redis Cluster will become the de facto standard for Redis sharding.
Redis Cluster is a hybrid between query routing and client-side sharding.
Twemproxy is a proxy developed by Twitter that supports Memcached ASCII and Redis protocols. It is single-threaded, written in C, and runs very fast. It is an open source project licensed under the Apache 2.0 license.
Twemproxy supports automatic sharding across multiple Redis instances, with optional ejection of a node if it becomes unavailable (this changes the key-to-instance mapping, so you should only use this feature when using Redis as a cache).
Twemproxy is not a single point of failure, since you can start multiple proxies and have your clients connect to the first one that accepts the connection.
An alternative to Twemproxy is to use a client that implements client-side sharding through consistent hashing or other similar algorithms. There are several Redis clients that support consistent hashing, such as redis-rb and Predis.