What is the netty principle in Java?
1. Introduction to Netty
Netty is a high-performance, asynchronous, event-driven NIO framework built on the APIs provided by Java NIO. It supports TCP, UDP, and file transfer. As an asynchronous NIO framework, all of Netty's I/O operations are asynchronous and non-blocking: through the Future-Listener mechanism, users can obtain the result of an I/O operation either actively or via notification. As the most popular NIO framework today, Netty is widely used in Internet services, big-data distributed computing, the gaming industry, the communications industry, and so on; many well-known open-source components are also built on Netty's NIO framework.
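As a brief illustration of the Future-Listener mechanism mentioned above, the following sketch registers a ChannelFutureListener on a write operation so the caller is notified asynchronously when the I/O completes; the channel and message are assumed to exist in the surrounding code.

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;

public class FutureListenerExample {
    // Write a message and be notified asynchronously instead of blocking on the result.
    static void writeWithListener(Channel channel, Object msg) {
        ChannelFuture future = channel.writeAndFlush(msg); // returns immediately
        future.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture f) {
                if (f.isSuccess()) {
                    System.out.println("write completed");
                } else {
                    f.cause().printStackTrace();
                }
            }
        });
        // Alternatively, future.sync() blocks until the operation completes (the "active" way).
    }
}
```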
2. Netty thread model
In Java NIO, the Selector provides the foundation for the Reactor pattern. Netty combines the Selector with the Reactor pattern to design an efficient thread model. Let's look at the Reactor pattern first:
2.1 Reactor pattern
Wikipedia explains the Reactor pattern this way: "The reactor design pattern is an event handling pattern for handling service requests delivered concurrently by one or more inputs. The service handler then demultiplexes the incoming requests and dispatches them synchronously to the associated request handlers." First of all, the Reactor pattern is event-driven. There are one or more concurrent input sources, one Service Handler, and multiple Request Handlers. The Service Handler synchronously demultiplexes the incoming requests and dispatches them to the corresponding Request Handlers, as shown in the figure below:
The structure is somewhat similar to the producer-consumer model: one or more producers put events into a queue, and one or more consumers actively poll events from that queue for processing. The Reactor pattern, however, has no queue for buffering; whenever an event arrives at the Service Handler, the Service Handler actively dispatches it to the corresponding Request Handler according to the event type.
2.2 Implementation of the Reactor pattern
Regarding building the Reactor pattern with Java NIO, Doug Lea gave a good explanation in "Scalable IO in Java". The following descriptions are taken from that presentation's treatment of the Reactor implementations.
1. The first implementation model is as follows:
This is the simplest Reactor single-thread model. Because the Reactor pattern uses asynchronous non-blocking I/O, no I/O operation blocks, so in theory one thread can handle all I/O operations on its own. In this case the Reactor thread is a generalist: it demultiplexes multiple sockets, accepts new connections, and dispatches requests to the processing chain.
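A minimal single-threaded Reactor sketch in plain Java NIO, loosely following the structure in "Scalable IO in Java"; the class name, the echo behavior, and the buffer size are illustrative assumptions, not taken from any particular library.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// One thread does everything: select, accept, read, dispatch.
public class SingleThreadReactor implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel serverChannel;

    public SingleThreadReactor(int port) throws IOException {
        selector = Selector.open();
        serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(port));
        serverChannel.configureBlocking(false);
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select();                               // demultiplex ready events
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel ch = serverChannel.accept(); // accept new connection
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        if (ch.read(buf) < 0) {                  // peer closed
                            ch.close();
                        } else {
                            buf.flip();
                            ch.write(buf);                       // the "processing chain", still on this one thread
                        }
                    }
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```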
For small-capacity application scenarios, the single-threaded model can be used. However, it is not suitable for high-load, highly concurrent applications, mainly for the following reasons:
(1) A single NIO thread processing hundreds or thousands of links at the same time cannot keep up; even if the NIO thread's CPU load reaches 100%, it still cannot fully process all messages.
(2) When the NIO thread is overloaded, processing slows down, causing a large number of client connections to time out. Timed-out messages are often retransmitted, which further increases the NIO thread's load.
(3) Low reliability: an unexpected infinite loop in the single thread makes the entire communication system unavailable.
In order to solve these problems, the Reactor multi-threading model emerged.
2. Reactor multi-threading model:
Compared with the previous model, this model uses multi-threading (a thread pool) in the processing-chain part, as sketched below.
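A hedged sketch of the idea: the Reactor thread still performs the I/O, but decoding and business work are handed to a worker thread pool. The readMessage and process methods here are hypothetical placeholders.

```java
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Reactor multi-threading model: I/O stays on the Reactor thread,
// the processing chain runs in a worker thread pool.
public class PooledHandler {
    private static final ExecutorService workerPool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Called by the Reactor thread when a channel is readable.
    public void handleRead(SelectionKey key) {
        SocketChannel channel = (SocketChannel) key.channel();
        byte[] request = readMessage(channel);               // hypothetical non-blocking read
        workerPool.submit(() -> process(channel, request));  // business logic runs off the I/O thread
    }

    private byte[] readMessage(SocketChannel channel) { /* read bytes from the channel */ return new byte[0]; }

    private void process(SocketChannel channel, byte[] request) { /* decode + business logic */ }
}
```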
In most scenarios, this model meets performance requirements. However, in some special application scenarios, for example when the server performs security authentication on the client's handshake messages, a single Acceptor thread may not have sufficient performance. To solve these problems, a third Reactor thread model was produced.
3. Reactor master-slave model
Compared with the second model, this model splits the Reactor into two parts. The mainReactor is responsible for listening on the server socket and accepting new connections, and it assigns each established socket to a subReactor. The subReactor is responsible for demultiplexing the connected sockets, reading and writing network data, and handing business processing off to the worker thread pool. Usually, the number of subReactors can equal the number of CPUs.
2.3 Netty model
Section 2.2 covered the three Reactor models, so which one does Netty use? In fact, Netty's thread model is a variant of the Reactor pattern: the third model with the thread pool removed. This is also the default mode of Netty NIO. The participants of the Reactor pattern in Netty mainly include the following components:
(1) Selector
(2) EventLoopGroup/EventLoop
(3) ChannelPipeline
The Selector is the SelectableChannel multiplexer provided by NIO and plays the role of the demultiplexer; it will not be described further here. The other two components and their roles in Netty's Reactor pattern are introduced below.
3. EventLoopGroup/EventLoop
When the system is running, frequent thread context switching causes additional performance loss. When multiple threads execute a business flow concurrently, business developers must also stay alert to thread safety at all times: which data may be modified concurrently, and how should it be protected? This not only reduces development efficiency but also causes additional performance loss.
To solve these problems, Netty adopts a serialization design: from reading and decoding the message to the subsequent execution of Handlers, the same I/O thread (EventLoop) is responsible throughout. This means the whole flow involves no thread context switches and the data is not at risk of concurrent modification. It also explains why Netty's thread model removes the thread pool from the Reactor master-slave model.
EventLoopGroup is an abstraction over a group of EventLoops. EventLoopGroup provides a next() interface that selects one EventLoop from the group, according to certain rules, to process a task. What you need to know about EventLoopGroup here is that in Netty server programming we generally need two EventLoopGroups working together: a boss EventLoopGroup and a worker EventLoopGroup. Usually one service port, that is, one ServerSocketChannel, corresponds to one Selector and one EventLoop thread, which means the thread-count parameter of the boss EventLoopGroup is 1. The boss EventLoop is responsible for accepting the client's connections and handing the resulting SocketChannel to the worker EventLoopGroup for I/O processing.
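A minimal, hedged sketch of the usual two-group setup described above; the port number and the empty initializer body are assumptions for illustration.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts connections (mainReactor), 1 thread
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // handles I/O (subReactor)
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // add ChannelHandlers to the child channel's pipeline here
                 }
             });
            ChannelFuture f = b.bind(8080).sync();             // port 8080 is an arbitrary choice
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```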
The implementation of EventLoop acts as the Dispatcher in the Reactor pattern.
4. ChannelPipeline
ChannelPipeline actually plays the role of request processor in Reactor mode.
The default implementation of ChannelPipeline is DefaultChannelPipeline. DefaultChannelPipeline itself maintains a head and a tail ChannelHandler that are invisible to the user, located at the two ends of the linked list; the tail is at the upper (application) end, and the head is at the end closer to the network layer. There are two important ChannelHandler interfaces in Netty, ChannelInboundHandler and ChannelOutboundHandler. Inbound can be understood as network data flowing from the outside into the system, and outbound as network data flowing from inside the system to the outside. A user-defined ChannelHandler can implement one or both interfaces as needed and is added to the Pipeline's linked list; the ChannelPipeline finds the corresponding Handlers to process each I/O event according to its type. The linked list is a variant of the chain-of-responsibility pattern: traversed top-down or bottom-up, all Handlers that match the event type will process the event.
ChannelInboundHandler processes messages sent from the client to the server; it is generally used for handling half-packet/sticky-packet issues, decoding, reading data, business processing, and so on. ChannelOutboundHandler processes messages sent from the server to the client; it is generally used for encoding and sending messages to the client.
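As a hedged illustration of how inbound and outbound handlers sit in the same pipeline, the sketch below adds one of each; the handler names and the echo-style logic are made up for the example.

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.*;
import io.netty.channel.socket.SocketChannel;

public class PipelineSetup extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        // Outbound handler: sees responses flowing from the server out toward the client.
        pipeline.addLast("encoder", new ChannelOutboundHandlerAdapter() {
            @Override
            public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
                // encode msg here before passing it further toward the head/network
                ctx.write(msg, promise);
            }
        });
        // Inbound handler: sees requests flowing from the client into the server.
        pipeline.addLast("businessHandler", new SimpleChannelInboundHandler<ByteBuf>() {
            @Override
            protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
                // decode/process the request, then write a response (it goes out through "encoder")
                ctx.writeAndFlush(msg.retain());
            }
        });
    }
}
```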
The following figure is an illustration of the execution process of ChannelPipeline:
5. Buffer
The extended Buffer provided by Netty has many advantages over NIO's. Since buffers are a very important part of data access, let's look at the characteristics of Netty's Buffer.
1. ByteBuf read and write pointers
In ByteBuffer, reads and writes share a single position pointer, while ByteBuf has separate readerIndex and writerIndex pointers. Intuitively, ByteBuffer implements the function of two pointers with just one, saving a variable, but switching a ByteBuffer between read and write states requires calling flip, and before the next write the buffer's content must be read out and clear must be called. Calling flip before each read and clear before each write adds tedious steps to development, and content cannot be written until the existing content has been read, which is inflexible. ByteBuf, by contrast, relies only on the readerIndex pointer when reading and only on the writerIndex pointer when writing; there is no need to call a mode-switching method before each read or write, and there is no requirement to read everything in one go.
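A small, hedged comparison of the two read/write styles; the buffer sizes and values are arbitrary.

```java
import java.nio.ByteBuffer;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class PointerComparison {
    public static void main(String[] args) {
        // JDK ByteBuffer: one position pointer, so flip()/clear() are needed to switch modes.
        ByteBuffer nioBuf = ByteBuffer.allocate(16);
        nioBuf.put((byte) 1).put((byte) 2);
        nioBuf.flip();                       // switch from write mode to read mode
        byte first = nioBuf.get();
        nioBuf.clear();                      // must clear before writing again

        // Netty ByteBuf: independent readerIndex and writerIndex, no mode switching.
        ByteBuf nettyBuf = Unpooled.buffer(16);
        nettyBuf.writeByte(1).writeByte(2);  // advances writerIndex only
        byte b = nettyBuf.readByte();        // advances readerIndex only
        nettyBuf.writeByte(3);               // can keep writing without reading everything first
        System.out.println(first + " " + b + " readable=" + nettyBuf.readableBytes());
    }
}
```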
2. Zero copy
(1) Netty's receive and send buffers use DIRECT BUFFERS, that is, off-heap direct memory for socket reads and writes, with no secondary copy of the byte buffer. If traditional heap memory (HEAP BUFFERS) were used for socket reads and writes, the JVM would first copy the heap buffer into direct memory and then write it to the socket; compared with off-heap direct memory, the message undergoes one extra buffer copy during sending.
(2) Netty provides a composite Buffer object that can aggregate multiple ByteBuf objects. Users can operate the composite Buffer as conveniently as a single Buffer, avoiding the traditional approach of merging several small Buffers into one large Buffer through memory copies (see the sketch after this list).
(3) Netty's file transfer uses the transferTo method, which can send data from a file channel directly to the target Channel, avoiding the memory copies caused by the traditional write-in-a-loop approach.
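For point (2), a hedged sketch of aggregating buffers without copying them into one large buffer, using Netty's CompositeByteBuf; the "HEADER"/"BODY" contents are arbitrary.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class CompositeBufferExample {
    public static void main(String[] args) {
        ByteBuf header = Unpooled.copiedBuffer("HEADER", CharsetUtil.UTF_8);
        ByteBuf body = Unpooled.copiedBuffer("BODY", CharsetUtil.UTF_8);

        // Aggregate the two buffers logically; no bytes are copied into a new large buffer.
        CompositeByteBuf message = Unpooled.compositeBuffer();
        message.addComponents(true, header, body);   // 'true' updates the writerIndex automatically

        // The composite can be read like an ordinary ByteBuf.
        System.out.println(message.toString(CharsetUtil.UTF_8));
        message.release();                           // releases the underlying components too
    }
}
```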
3. Reference counting and pooling technology
In Netty, every allocated Buffer may be a very valuable resource, so to gain more control over memory allocation and reclamation, Netty implements memory management based on reference counting. Netty's Buffers are based on direct memory (DirectBuffer), which greatly improves I/O efficiency. However, besides its high I/O efficiency, DirectBuffer has a natural shortcoming compared with HeapBuffer: allocating a DirectBuffer is less efficient than allocating a HeapBuffer. Netty therefore combines reference counting with pooling (PooledBuffer): when a Buffer's reference count drops to 0, Netty returns it to the pool, and it will be reused the next time a Buffer is requested.
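A hedged sketch of pooled allocation and reference counting; the buffer size and contents are arbitrary.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class PooledBufferExample {
    public static void main(String[] args) {
        // Allocate a direct (off-heap) buffer from Netty's pooled allocator.
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(256);
        System.out.println("refCnt after allocation: " + buf.refCnt()); // 1

        buf.retain();                                  // another "owner" takes a reference -> 2
        buf.writeBytes(new byte[]{1, 2, 3});

        buf.release();                                 // refCnt back to 1
        boolean deallocated = buf.release();           // refCnt hits 0: buffer is returned to the pool
        System.out.println("deallocated: " + deallocated);
    }
}
```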
Summary
Netty is essentially an implementation of the Reactor pattern, with the Selector as the multiplexer, the EventLoop as the dispatcher, and the Pipeline as the event processor. Unlike a typical Reactor, Netty adopts a serialization design and uses a chain-of-responsibility variant in the Pipeline.
The buffer in Netty has been optimized compared to the buffer in NIO, which greatly improves performance.