Redis experience you need to know for Linux operation and maintenance
Redis is very popular in the technology community today. It has come a long way from Antirez's small personal project to becoming the industry standard for in-memory data storage, and along the way a set of best practices has emerged that lets most people use Redis correctly.
Below we will walk through 10 tips for using Redis correctly.
Okay, opening this article by attacking a command may not be the best way to start, but it may well be the most important point. Far too often, when we want a quick look at what is inside a Redis instance, we fire off the "KEYS *" command so that every key is listed on screen. To be fair, from a programming perspective we tend to write pseudocode like the following:
for key in redis.keys('*'):
    doAllTheThings(key)
That is fine until you have 13 million keys, at which point execution slows to a crawl. The time complexity of KEYS is O(N), where N is the number of keys in the database, so the cost of the command grows with the size of your dataset. And because Redis serves commands on a single thread, no other command can run on the instance while KEYS is executing.
As an alternative, take a look at SCAN, which lets you do this in a much friendlier way. SCAN walks the keyspace in incremental iterations driven by a cursor, so you can stop and resume at any time as you see fit, without blocking the server for the whole traversal.
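As a rough illustration, here is a minimal sketch of the same loop built on SCAN, assuming the redis-py client and a server on localhost; doAllTheThings is a hypothetical placeholder for your own per-key processing:

import redis

r = redis.Redis(host='127.0.0.1', port=6379, decode_responses=True)

def doAllTheThings(key):
    # Hypothetical placeholder for whatever per-key work you need.
    print(key)

# scan_iter() wraps SCAN and its cursor for you, fetching keys in small
# batches (count is only a hint to the server) instead of blocking the
# instance on one huge KEYS reply.
for key in r.scan_iter(match='*', count=1000):
    doAllTheThings(key)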
Because Redis does not keep very detailed logs, it is hard to know what is actually going on inside a Redis instance. Fortunately, Redis ships with a command-statistics tool that looks like this:
127.0.0.1:6379> INFO commandstats
# Commandstats
cmdstat_get:calls=78,usec=608,usec_per_call=7.79
cmdstat_setex:calls=5,usec=71,usec_per_call=14.20
cmdstat_keys:calls=2,usec=42,usec_per_call=21.00
cmdstat_info:calls=10,usec=1931,usec_per_call=193.10
Through this tool you can see a snapshot of statistics for every command: how many times it has been executed, and the total and average time spent executing it, in microseconds. To start over, simply run CONFIG RESETSTAT and you get a completely fresh set of statistics.
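If you would rather pull these numbers from application code than from redis-cli, here is a minimal sketch assuming the redis-py client; the entries mirror the cmdstat_* lines shown above:

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# INFO commandstats returns one entry per command, e.g. 'cmdstat_get'.
stats = r.info('commandstats')
for name, data in stats.items():
    # Each entry holds calls, usec (total microseconds) and usec_per_call.
    print(name, data['calls'], data['usec_per_call'])

# Wipe the counters to start a fresh measurement window.
r.config_resetstat()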
Salvatore, the father of Redis, once said: "Benchmarking Redis with GET/SET commands is like testing how well a Ferrari's wipers clean the mirrors on a rainy day." People often come to me wanting to know why their redis-benchmark numbers fall short of the optimal results, but they are leaving out all the variables of a real-world deployment.
The results of redis-benchmark give you a baseline that confirms your redis-server is not running in some abnormal state, but you should never treat them as a realistic load test. A load test needs to reflect how your application actually behaves, and it needs an environment that is as close to production as possible.
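For what it is worth, even a quick redis-benchmark run is more telling when it is restricted to the commands and payload sizes your application actually uses; the options below (50 clients, 100,000 requests, 256-byte values) are purely illustrative assumptions:

$ redis-benchmark -h 127.0.0.1 -p 6379 -t set,get -c 50 -n 100000 -d 256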
Embrace hashes wherever it makes sense; they will give you a far better experience. In the past I have seen many key structures like the following:
foo:first_name
foo:last_name
foo:address
In the example above, foo is probably a user's username, and each of these items is stored as a separate key. That leaves more room for error and creates keys you do not need. Use a hash instead, and you will be surprised to find that a single key is all it takes:
127.0.0.1:6379> HSET foo first_name 'Joe'
(integer) 1
127.0.0.1:6379> HSET foo last_name 'Engel'
(integer) 1
127.0.0.1:6379> HSET foo address '1 Fanatical Pl'
(integer) 1
127.0.0.1:6379> HGETALL foo
1) "first_name"
2) "Joe"
3) "last_name"
4) "Engel"
5) "address"
6) "1 Fanatical Pl"
127.0.0.1:6379> HGET foo first_name
"Joe"
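In application code the same idea keeps the whole user under one key; a minimal sketch, assuming the redis-py client (hset with mapping= requires redis-py 3.5 or later):

import redis

r = redis.Redis(host='127.0.0.1', port=6379, decode_responses=True)

# One key, several fields: the whole user lives under 'foo'.
r.hset('foo', mapping={
    'first_name': 'Joe',
    'last_name': 'Engel',
    'address': '1 Fanatical Pl',
})

print(r.hgetall('foo'))             # {'first_name': 'Joe', ...}
print(r.hget('foo', 'first_name'))  # 'Joe'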
Whenever possible, take advantage of key expiration. A good example is storing something like a temporary authentication key. When you look up an authorization key — OAuth, for example — it usually comes with an expiration time. Set the same timeout when you store the key and Redis will purge it for you automatically, with no need to sweep through every key with KEYS *. Convenient, isn't it?
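As a rough sketch, assuming the redis-py client and an OAuth-style token whose provider reports expires_in seconds, you can hand that TTL straight to Redis:

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# Hypothetical values; in practice they come back from your OAuth provider.
access_token = 'abc123'
expires_in = 3600  # seconds

# SETEX stores the value and the TTL in one command; SET with EX works too.
r.setex('oauth:token:user42', expires_in, access_token)

print(r.ttl('oauth:token:user42'))  # seconds remaining before Redis deletes it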
Now that we are on the subject of clearing out keys, let's talk about eviction policies. When a Redis instance fills up its memory, it will try to evict some keys. Depending on your use case, I strongly recommend the volatile-lru policy, provided you have set timeouts on your keys. If you are running something more like a pure cache and do not set timeouts on keys, the allkeys-lru policy is worth considering. My advice is to look at the available policies first.
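For illustration, a minimal sketch of applying this at runtime, assuming the redis-py client and a 2 GB memory cap chosen purely as an example value (the same settings can also live in redis.conf):

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# Cap memory and tell Redis which keys it may evict when the cap is hit.
r.config_set('maxmemory', '2gb')
r.config_set('maxmemory-policy', 'volatile-lru')  # or 'allkeys-lru' for a pure cache

print(r.config_get('maxmemory-policy'))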
If you must ensure that critical data makes it into a Redis instance, I strongly recommend wrapping the write in a try/except block. Almost all Redis clients use a "fire and forget" style, so it is often worth checking whether a key was actually written to the database. Wrapping Redis commands in try/except is not complicated enough to warrant its own article; just know that doing so ensures your important data lands where it is supposed to.
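A minimal sketch of that pattern with the redis-py client, which raises redis.RedisError subclasses (ConnectionError, TimeoutError, and so on) when a write cannot be completed; the fallback behaviour here is just a placeholder assumption:

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

def save_critical(key, value):
    try:
        # set() returns True on success; raise if Redis reports otherwise.
        if not r.set(key, value):
            raise redis.RedisError('SET returned a falsy reply')
        return True
    except redis.RedisError as exc:
        # Placeholder: log, retry, or fall back to another store here.
        print(f'failed to write {key!r} to Redis: {exc}')
        return False

save_critical('order:1001:status', 'paid')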
Whenever possible, spread the workload across multiple Redis instances. Starting with version 3.0.0, Redis supports clustering: Redis Cluster lets you split your keys, along with their master/slave setups, across instances based on key ranges. The full "magic" behind clustering, along with good tutorials, is documented on redis.io. If clustering is not an option, consider namespacing and spreading your keys across multiple instances yourself; there is an excellent write-up on how to distribute your data on the redis.io website.
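If clustering is off the table, here is a minimal sketch of the hand-rolled alternative, hashing each key onto one of several standalone instances; the ports and the CRC32-based hash are assumptions for illustration only:

import zlib
import redis

# Hypothetical standalone instances; in practice these would be separate hosts.
shards = [
    redis.Redis(host='127.0.0.1', port=6379),
    redis.Redis(host='127.0.0.1', port=6380),
    redis.Redis(host='127.0.0.1', port=6381),
]

def shard_for(key):
    # A stable hash of the key decides which instance owns it.
    return shards[zlib.crc32(key.encode()) % len(shards)]

shard_for('user:42:profile').set('user:42:profile', '{"name": "Joe"}')
print(shard_for('user:42:profile').get('user:42:profile'))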
It is a common assumption that Redis needs a machine with many cores, and it is wrong. Redis is a single-threaded process and will use at most two cores, even with persistence enabled. Unless you plan to run multiple instances on a single host (hopefully only in a development or test environment!), there is no need to give a Redis instance more than two cores.
By now, Redis Sentinel has been thoroughly tested, and many users run it in production environments (ObjectRocket included). If your application relies heavily on Redis, you need a high-availability plan to make sure it never goes offline. Of course, if you would rather not manage this yourself, ObjectRocket offers a highly available platform with 24×7 support, and it is worth considering.
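As a minimal sketch of the client side, assuming the redis-py Sentinel helper, three sentinels on port 26379, and a monitored master named 'mymaster' (all illustrative values):

from redis.sentinel import Sentinel

# Hypothetical sentinel endpoints; use your real sentinel hosts here.
sentinel = Sentinel(
    [('10.0.0.1', 26379), ('10.0.0.2', 26379), ('10.0.0.3', 26379)],
    socket_timeout=0.5,
)

# Sentinel tells the client which node is currently the master / a replica,
# so the application keeps working across a failover.
master = sentinel.master_for('mymaster', socket_timeout=0.5)
replica = sentinel.slave_for('mymaster', socket_timeout=0.5)

master.set('hello', 'world')
print(replica.get('hello'))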