```java
while (true) {
    data = socket.read();    // non-blocking: returns immediately, even with no data ready
    if (data != error) {
        // process the data
        break;
    }
}
```
But non-blocking IO has a serious problem: the while loop must constantly ask the kernel whether the data is ready, which drives CPU usage very high. Therefore, a busy-wait loop like this is rarely used to read data in practice.
3. Multiplexed IO model
The multiplexed IO model is the most widely used model today; Java NIO is in fact multiplexed IO.
In the multiplexed IO model, a single thread continuously polls the status of multiple sockets, and the actual IO read/write operations are invoked only when a socket really has a read or write event. Because one thread can manage many sockets, the system does not need to create new processes or threads, nor maintain them, and IO resources are used only when real socket read/write events occur, so resource usage is greatly reduced.
In Java NIO, selector.select() is used to query whether each channel has an event. If no event has arrived, it blocks there, so this method still blocks the user thread.
Some readers may say that a similar effect can be achieved with multi-threading plus blocking IO. However, in that approach each socket corresponds to one thread, which consumes a lot of resources; in particular, for long-lived connections the thread resources are never released, so a large number of connections eventually causes a performance bottleneck.
In the multiplexed IO model, multiple sockets are managed by one thread, and resources are occupied for actual reads and writes only when a socket really has a read or write event. Therefore, multiplexed IO is better suited to scenarios with a large number of connections.
In addition, multiplexed IO is more efficient than the non-blocking IO model because in non-blocking IO the socket status is polled by the user thread, whereas in multiplexed IO the polling of each socket's status is done by the kernel, which is much more efficient.
However, note that the multiplexed IO model detects arriving events by polling and responds to them one by one. Therefore, once the handling of one event takes a long time, subsequent events go unprocessed for a long time and new event polling is delayed.
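As a minimal, self-contained sketch of this model in Java NIO, the snippet below blocks in selector.select() until a channel is ready and only then performs the actual read. A Pipe stands in for network sockets so the example runs without a server; the class and method names (SelectorDemo, readViaSelector) are illustrative only.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class SelectorDemo {
    // Waits on a Selector until the pipe's source channel is readable,
    // then performs the actual read -- the multiplexed IO pattern.
    public static String readViaSelector(String message) throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);          // selectable channels must be non-blocking
        pipe.source().register(selector, SelectionKey.OP_READ);

        // Simulate a peer producing data (in a real server this is a remote client).
        pipe.sink().write(ByteBuffer.wrap(message.getBytes(StandardCharsets.UTF_8)));

        selector.select();                               // blocks until at least one channel is ready
        StringBuilder out = new StringBuilder();
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();                                 // selected keys must be removed by hand
            if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(64);
                pipe.source().read(buf);                 // the actual IO happens only now
                buf.flip();
                out.append(StandardCharsets.UTF_8.decode(buf));
            }
        }
        selector.close();
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readViaSelector("ready"));
    }
}
```

One thread, one selector, any number of registered channels: this is the structural core that a real server builds on.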
4. Signal-driven IO model
In the signal-driven IO model, when the user thread initiates an IO request, a signal handler is registered for the corresponding socket, and the user thread then continues to run. When the kernel data is ready, a signal is sent to the user thread; upon receiving the signal, the user thread calls the IO read/write functions in the signal handler to perform the actual IO operation.
5. Asynchronous IO model
The asynchronous IO model is the most ideal IO model. In it, when the user thread initiates a read operation, it can immediately start doing other things. From the kernel's perspective, when it receives an asynchronous read it returns immediately, indicating that the read request has been successfully initiated, so the user thread is not blocked. The kernel then waits for data preparation to complete and copies the data to the user thread. When all of this is done, the kernel sends a signal to the user thread telling it that the read operation is complete. In other words, the user thread does not need to know how the entire IO operation is actually performed; it only initiates a request, and when it receives the kernel's success signal, the IO operation is already complete and the data can be used directly.
In other words, in the asynchronous IO model, neither phase of the IO operation blocks the user thread. Both phases are completed automatically by the kernel, which then sends a signal to inform the user thread that the operation is finished; there is no need to call an IO function in the user thread to do the actual reading and writing. This differs from the signal-driven model: there, the signal tells the user thread that the data is ready, and the user thread must then call an IO function to perform the actual read or write; in the asynchronous IO model, receiving the signal means the IO operation is already complete, so no IO function call is needed in the user thread.
Note that asynchronous IO requires underlying support from the operating system. Java 7 provides asynchronous IO (NIO.2).
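A minimal sketch of Java 7's asynchronous IO (NIO.2), using a small temporary file as the data source for self-containment: the read() call returns immediately, and the CompletionHandler fires only after the data has already been copied into the buffer. The class name AsyncReadDemo and the helper asyncRead are illustrative, not part of any library.

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CountDownLatch;

public class AsyncReadDemo {
    public static String asyncRead(Path file) throws Exception {
        StringBuilder result = new StringBuilder();
        CountDownLatch done = new CountDownLatch(1);
        AsynchronousFileChannel ch =
                AsynchronousFileChannel.open(file, StandardOpenOption.READ);
        ByteBuffer buf = ByteBuffer.allocate(128);
        // read() returns immediately; the handler is invoked only after the
        // data has already been copied into buf -- both phases done for us.
        ch.read(buf, 0, null, new CompletionHandler<Integer, Void>() {
            @Override public void completed(Integer bytes, Void att) {
                buf.flip();
                result.append(StandardCharsets.UTF_8.decode(buf));
                done.countDown();
            }
            @Override public void failed(Throwable exc, Void att) {
                done.countDown();
            }
        });
        // The initiating thread is free to do other work here.
        done.await();
        ch.close();
        return result.toString();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("aio", ".txt");
        Files.write(tmp, "async done".getBytes(StandardCharsets.UTF_8));
        System.out.println(asyncRead(tmp));
        Files.delete(tmp);
    }
}
```

The CountDownLatch is only there to keep the demo deterministic; a real application would carry on with other work instead of waiting.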
The first four IO models are actually synchronous IO; only the last one is truly asynchronous IO, because in both multiplexed IO and the signal-driven model, the second stage of the IO operation still blocks the user thread. That is, the process of the kernel copying data to user space blocks the user thread.
6. Two high-performance IO design patterns
Among traditional network service design patterns, there are two classic ones:
One is multi-threading; the other is the thread pool.
In the multi-threaded mode, when a client arrives, the server creates a new thread to handle that client's read and write events, as shown in the following figure:
Although this mode is simple and convenient, the server uses one thread for each client connection, which consumes a lot of resources. When the number of connections reaches the upper limit and another user requests a connection, it directly causes a resource bottleneck and, in severe cases, may crash the server.
Therefore, to solve the problems of the one-thread-per-client mode, the thread pool approach was proposed: create a thread pool of fixed size, and when a client arrives, take an idle thread from the pool to handle it. When the client finishes its read and write operations, it gives the thread back. This avoids the waste of resources caused by creating a thread for every client and allows threads to be reused.
But the thread pool also has drawbacks. If most connections are long-lived, all the threads in the pool may stay occupied for a long period; when another user then requests a connection, there is no idle thread available to handle it, the client connection fails, and the user experience suffers. Therefore, the thread pool is better suited to applications with a large number of short connections.
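The thread-pool idea can be sketched with java.util.concurrent's fixed-size pool. Here the per-connection read/write work is reduced to a placeholder counter so the example is self-contained; ThreadPoolServerSketch and handleClients are illustrative names, not a real server API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolServerSketch {
    public static int handleClients(int clients) throws InterruptedException {
        // Fixed-size pool: at most 4 "connections" are served at once;
        // further clients wait in the queue until a thread becomes idle.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < clients; i++) {
            pool.submit(() -> {
                // Placeholder for the per-connection read/write work.
                handled.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return handled.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handleClients(10));
    }
}
```

If the placeholder work were a long-lived connection, all 4 threads could stay busy indefinitely, which is exactly the drawback described above.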
Therefore, the following two high-performance IO design patterns have emerged: Reactor and Proactor.
In the Reactor pattern, the events of interest are first registered for each client, and then a thread polls the clients for events. When events occur, they are processed one by one; when all events have been handled, the thread goes back to polling, as shown in the following figure:
It can be seen that the multiplexed IO among the five IO models above adopts the Reactor pattern. Note that the figure shows events being processed sequentially; of course, to speed up event processing, events can be handled by multiple threads or a thread pool.
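As a sketch (not a production design), the example below implements a minimal single-threaded Reactor with Java NIO: one selector dispatches both accept and read events, and a plain blocking client on a second thread plays the remote peer so the example is self-contained. All names (ReactorEchoDemo, echoOnce) are illustrative.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class ReactorEchoDemo {
    // Single-threaded reactor: one selector dispatches accept and read events.
    public static String echoOnce(String msg) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));   // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // A plain blocking client on another thread plays the remote peer.
        final String[] reply = new String[1];
        Thread client = new Thread(() -> {
            try (SocketChannel c = SocketChannel.open(
                    new InetSocketAddress("127.0.0.1", port))) {
                c.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));
                ByteBuffer buf = ByteBuffer.allocate(64);
                c.read(buf);
                buf.flip();
                reply[0] = StandardCharsets.UTF_8.decode(buf).toString();
            } catch (IOException ignored) { }
        });
        client.start();

        boolean served = false;
        while (!served) {                                    // the reactor loop
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                    // dispatch: new connection
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {               // dispatch: data arrived
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ch.read(buf);
                    buf.flip();
                    ch.write(buf);                           // echo back to the client
                    ch.close();
                    served = true;
                }
            }
        }
        client.join();
        server.close();
        selector.close();
        return reply[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("ping"));
    }
}
```

The two `if` branches are the "response body" mentioned above: if either handler runs long, every other client registered with this selector waits.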
In the Proactor pattern, when an event is detected, a new asynchronous operation is started and handed to the kernel thread to process. When the kernel thread completes the IO operation, a notification is sent that the operation is complete. It can be seen that the asynchronous IO model uses the Proactor pattern.