There has long been a misconception that a high-performance server must be implemented with multiple threads. The reasoning is simple, and it rests on a second misconception: that multiple threads must be more efficient than a single thread. In fact, that is not the case.
The core of Redis: if all the data is in memory, operating on it with a single thread is the most efficient approach. Why? Because multi-threading ultimately means the CPU time-slices between multiple execution streams, and that simulation has a price: context switching. For a purely in-memory system, having no context switches at all is the most efficient.
Redis binds one CPU core to a piece of in-memory data, and every read and write against that memory is done on that one core, by a single thread. When the data lives in memory, this is the best possible arrangement.
A CPU context switch costs roughly 1,500 ns, while reading 1 MB of sequential data from memory takes about 250 µs. Suppose that 1 MB is instead read by multiple threads in 1,000 pieces, causing 1,000 context switches: that alone costs 1,500 ns × 1,000 = 1,500 µs. A single thread reads the whole 1 MB in 250 µs; the multi-threaded version spends 1,500 µs just on context switches, before even counting the time spent actually reading each small piece of data.
When should we use a multi-threaded solution?
The answer: when the underlying storage is slow, for example a disk.
Memory is a system with very high IOPS. Allocating a block of memory, or freeing one, is quick and easy, and allocations can be sized dynamically.
A disk is the opposite: its IOPS is very low, but its throughput is very high. That means a large number of read and write operations should be collected together and submitted to the disk as a batch to get the best performance. Why?
Suppose I have a transaction group, that is, several separate requests issued together, say write, read, write, read, write, five operations in all. In memory, because IOPS is so high, they can simply be completed one by one. On disk, handling the same sequence one by one looks like this: the first write has to seek to the right position (about 10 ms), read the existing data (about 1 ms), do the computation (negligible), and write the result back (another 10 ms), about 21 ms in total. The second operation is a read, about 10 ms; the third is another 21 ms write; then another 10 ms read and a final 21 ms write. The five requests take 21 + 10 + 21 + 10 + 21 = 83 ms in total, and that is the ideal case. In memory the whole group would finish in well under 1 ms.
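Here is a back-of-the-envelope sketch of that comparison, using the rough per-operation costs quoted above (the class name and exact numbers are only illustrative, not measurements):

```java
// Model the five-request group above with the rough costs from the text:
// a plain read is ~10 ms (seek + read), a read-modify-write is ~21 ms.
public class DiskLatencyEstimate {
    public static void main(String[] args) {
        double readMs = 10;            // seek + read
        double writeMs = 10 + 1 + 10;  // seek + read old data + write back ≈ 21 ms

        // the transaction group: write, read, write, read, write
        double[] group = { writeMs, readMs, writeMs, readMs, writeMs };

        double totalMs = 0;
        for (double op : group) {
            totalMs += op;             // each request waits for the previous one
        }
        System.out.printf("disk, one request at a time: ~%.0f ms%n", totalMs); // ~83 ms
        System.out.println("memory, one request at a time: well under 1 ms");
    }
}
```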
So for a disk, whose strength is throughput, the best solution is clearly to collect N requests in a buffer and then submit them together.
The way to do that is asynchronous processing: do not tie a request to the thread that processes it. The requesting thread just drops the request into a buffer, and once the buffer is nearly full a processing thread takes the whole buffer and performs the disk writes or reads in one go. That gives the highest efficiency, and it is essentially how buffered IO in Java works.
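Here is a minimal sketch of that idea in Java: requesting threads enqueue records into a bounded buffer, and a single writer thread drains whatever has accumulated and flushes it to disk as one batch. The class name, method names and batch sizes are made up for illustration; this is not Redis's or netty's actual code.

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BatchedDiskWriter {
    // Requesting threads only touch this buffer, never the disk directly.
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(1024);

    public void submit(String record) throws InterruptedException {
        buffer.put(record); // cheap: hand the request over and return
    }

    // Run this in a single dedicated processing thread.
    public void writerLoop(String path) throws IOException, InterruptedException {
        try (BufferedOutputStream out =
                     new BufferedOutputStream(new FileOutputStream(path, true))) {
            List<String> batch = new ArrayList<>();
            while (!Thread.currentThread().isInterrupted()) {
                batch.add(buffer.take());   // block until at least one request exists
                buffer.drainTo(batch, 511); // then grab whatever else has piled up
                for (String record : batch) {
                    out.write((record + "\n").getBytes(StandardCharsets.UTF_8));
                }
                out.flush();                // one submission to the disk per batch
                batch.clear();
            }
        }
    }
}
```

Many producer threads can call submit() concurrently, while all the actual disk traffic is issued by one thread in large sequential chunks, which is exactly the access pattern a disk is good at.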
This way of processing is the best fit for slow devices, and slow devices include disks, the network, SSDs and so on. Combining multiple threads with asynchronous processing is a very common way to deal with them; it is what the well-known netty framework does.
That finally makes it clear why Redis is single-threaded, and when to use a single thread versus multiple threads. It is really a simple matter, but it is embarrassing how confusing it becomes when the fundamentals are shaky.
A quote from master Biyifa, on why a single CPU core bound to a piece of memory is the most efficient:
"We should not leave it to the operating system's load balancing, because we know our own program better than it does; we can assign CPU cores to it manually without hogging the CPU." By default, a single-threaded process may end up on different CPU cores across system calls. To optimize Redis, we can use a tool to bind the single-threaded process to a fixed CPU core and cut out that unnecessary performance loss.
Redis is a single-process, single-threaded program, so to make full use of a multi-core CPU, several instances are often started on one server. To reduce the cost of switching, it is worth pinning each instance to a specific CPU core.
On Linux, taskset can bind a process to specific CPUs. You know your program better than the operating system does; pinning it avoids the scheduler moving it around unwisely and avoids the cache-invalidation overhead that such moves cause.
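For example (the config path and PID below are placeholders, not values from this article):

```bash
# start a Redis instance pinned to CPU core 0
taskset -c 0 redis-server /etc/redis/6379.conf

# or pin an instance that is already running, by its PID
taskset -cp 0 <redis-pid>
```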