1. Background
With the development of the Internet, the scale of web applications keeps growing, and the conventional vertical application architecture can no longer cope. A distributed service architecture and flow computing architecture are imperative, and a governance system is urgently needed to ensure the orderly evolution of the architecture.
Single application architecture
When website traffic is very small, only one application is needed, with all functions deployed together to reduce deployment nodes and cost.
At this time, a data access framework (ORM) that simplifies the workload of create, read, update, and delete operations is key.
Vertical application architecture
When the number of visits gradually increases, the speedup gained by adding machines to the single application becomes smaller and smaller, so the application is split into several unrelated applications to improve efficiency.
At this time, a web framework (MVC) that accelerates front-end page development is key.
Distributed service architecture
Split by business line
Stop RPC abuse: within a vertical business, give priority to local jar calls; use RPC calls only across businesses.
Correctly identify the ownership of business logic, maximize the cohesion of each module, and reduce coupling in terms of performance, availability, and maintainability.
Each release deploys only some of the servers.
Each node can be scaled according to its own needs.
Updating, deploying, and running each application does not affect the others.
Deployment separation
Team separation
Data separation
When there are more and more vertical applications, interactions between applications become inevitable. Core business is extracted as independent services, gradually forming a stable service center that allows front-end applications to respond more quickly to changing market demands.
At this time, a distributed service framework (RPC) that improves business reuse and integration is key.
Distributed Service RPC Framework
Flow Computing Architecture
When there are more and more services, problems such as capacity evaluation and wasted resources on small services gradually emerge. A dispatch center is then needed to manage cluster capacity in real time based on access pressure and to improve cluster utilization.
At this time, a resource scheduling and governance center (SOA) that improves machine utilization is key.
Netty thread model
Netty's threading model is mainly based on the Reactor pattern and has evolved into multiple versions for different application scenarios.
Single-threaded mode
Receiving service requests and performing IO operations are all done by one thread. Since non-blocking IO techniques such as IO multiplexing are used, the single-threaded mode can handle some scenarios when the request volume is small.
Single-receiver, multi-worker thread mode
When the number of requests increases, a single thread handling all IO operations can no longer deliver the required performance, so the concept of a worker thread pool is introduced. A single thread still accepts service requests, but after accepting a request it delegates the work to the worker thread pool, where a thread is obtained to execute the user request.
Multi-receiver, multi-worker thread mode
When the request volume increases further, a single thread accepting service requests cannot handle all client connections, so the pool of request-accepting threads is expanded as well, and multiple threads accept client connections at the same time.
RPC business threads
The above are Netty's own threading models, optimization strategies that have evolved as request volume grows. For RPC requests, what matters most to the application is the processing of business logic, which may be computation-intensive or IO-intensive; for example, most applications involve database operations, redis, or other network services.
If a business request involves such time-consuming IO operations, it is recommended to hand it to an independent thread pool; otherwise Netty's own IO threads may be blocked.
Division of work between the request thread and the worker thread
The request threads are mainly responsible for establishing the connection and then delegating the request to the worker threads.
The worker threads are responsible for encoding, decoding, IO reads, and other operations.
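The division of labor above can be sketched with plain JDK executors (a minimal illustration only, not Netty's actual implementation; the names bossPool and workerPool are made up for the example):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BossWorkerSketch {
    public static void main(String[] args) throws Exception {
        // "boss" pool: accepts incoming requests (simulated here)
        ExecutorService bossPool = Executors.newFixedThreadPool(2);
        // "worker" pool: performs the potentially slow business work
        ExecutorService workerPool = Executors.newFixedThreadPool(4);

        // the boss thread only hands the request off; a worker does the heavy lifting
        Future<Future<String>> handoff = bossPool.submit(() ->
                workerPool.submit(() -> "processed: request-1"));

        System.out.println(handoff.get().get()); // prints "processed: request-1"

        bossPool.shutdown();
        workerPool.shutdown();
    }
}
```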
The RPC framework I am currently implementing uses the multi-receiver, multi-worker thread mode. On the server side, the port is bound like this:
public void bind(ServiceConfig serviceConfig) {
    EventLoopGroup bossGroup = new NioEventLoopGroup();
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(this.rpcServerInitializer)
                .childOption(ChannelOption.SO_KEEPALIVE, true);
        try {
            ChannelFuture channelFuture = bootstrap.bind(serviceConfig.getHost(), serviceConfig.getPort()).sync();
            //...
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            throw new RpcException(e);
        }
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}
bossGroup is the group used to accept incoming service requests.
workerGroup is the group specifically responsible for IO operations.
To add business threads, we only need to further delegate the handler's work to a thread pool. For extensibility, an interface is defined here:

public interface RpcThreadPool {
    Executor getExecutor(int threadSize, int queues);
}
This implementation references dubbo's thread pool:
@Qualifier("fixedRpcThreadPool")
@Component
public class FixedRpcThreadPool implements RpcThreadPool {

    // volatile is required for double-checked locking to be safe
    private volatile Executor executor;

    @Override
    public Executor getExecutor(int threadSize, int queues) {
        if (null == executor) {
            synchronized (this) {
                if (null == executor) {
                    executor = new ThreadPoolExecutor(threadSize, threadSize, 0L, TimeUnit.MILLISECONDS,
                            queues == 0 ? new SynchronousQueue<Runnable>()
                                    : (queues < 0 ? new LinkedBlockingQueue<Runnable>()
                                            : new LinkedBlockingQueue<Runnable>(queues)),
                            new RejectedExecutionHandler() {
                                @Override
                                public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                                    //...
                                }
                            });
                }
            }
        }
        return executor;
    }
}
Interlude:
Once a friend suddenly asked me what coreSize in the Java thread pool meant, and my mind went blank, because I don't usually write multi-threaded code. I use database connection pools a lot and know their parameters well, but I just couldn't recall coreSize. Later I took a closer look at the thread pool's parameters, so I'll take this opportunity to go over them and avoid blanking out again.
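For reference, the main ThreadPoolExecutor parameters are corePoolSize (threads kept alive even when idle), maximumPoolSize (the upper bound, only reached once the work queue is full), keepAliveTime (idle timeout for threads above the core size), the work queue, and the rejection handler. A small JDK-only sketch:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreSizeDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize: threads kept even when idle
                4,                                   // maximumPoolSize: created only when the queue is full
                60L, TimeUnit.SECONDS,               // keepAliveTime for threads above coreSize
                new LinkedBlockingQueue<Runnable>(10)); // bounded work queue

        for (int i = 0; i < 3; i++) {
            pool.execute(() -> { /* some work */ });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize()); // prints "2/4"
    }
}
```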
When there are multiple thread pool implementations, the desired pool is selected dynamically by its thread pool name.
@Component
public class RpcThreadPoolFactory {

    @Autowired
    private Map<String, RpcThreadPool> rpcThreadPoolMap;

    public RpcThreadPool getThreadPool(String threadPoolName) {
        return this.rpcThreadPoolMap.get(threadPoolName);
    }
}
Wrap the method body in a task and hand it over to the thread pool for execution:
@Override
protected void channelRead0(ChannelHandlerContext channelHandlerContext, RpcRequest rpcRequest) {
    this.executor.execute(new Runnable() {
        @Override
        public void run() {
            RpcInvoker rpcInvoker = RpcServerInvoker.this.buildInvokerChain(RpcServerInvoker.this);
            RpcResponse response = (RpcResponse) rpcInvoker.invoke(RpcServerInvoker.this.buildRpcInvocation(rpcRequest));
            channelHandlerContext.writeAndFlush(response);
        }
    });
}
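As a rough JDK-only illustration of why this delegation matters (a sketch under assumed names; the single-thread executor stands in for Netty's IO event loop):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadDemo {
    public static void main(String[] args) throws Exception {
        // stand-in for Netty's single-threaded IO event loop
        ExecutorService ioLoop = Executors.newSingleThreadExecutor();
        // separate business pool for slow, blocking work
        ExecutorService business = Executors.newFixedThreadPool(2);

        long start = System.nanoTime();
        // the IO thread only submits the slow call and returns immediately,
        // instead of sleeping 200 ms itself and blocking other requests
        ioLoop.submit(() -> business.submit(() -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        })).get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // the IO thread was freed long before the business work finished
        System.out.println(elapsedMs < 200);

        ioLoop.shutdown();
        business.shutdown();
    }
}
```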
Stress testing has not been done yet, so there is no clear performance data to compare for the time being.