
Solving high concurrency problems in Java systems

黄舟 (Original) · 2017-09-21 10:22:56

This article introduces solutions to high-concurrency problems in Java systems. The content is fairly comprehensive; readers who need it can use it as a reference.

A small website, such as a personal site, can be built with the simplest static HTML pages plus a few images for visual polish, with all pages stored in a single directory. Such a site places very modest demands on system architecture and performance. As Internet businesses have grown richer, website technology has, over years of development, been divided into very fine specialties. Large websites in particular draw on a very wide range of techniques, from hardware to software, programming languages, databases, web servers, firewalls and more, all with very high requirements; they are no longer something a simple static HTML site can match.

Large websites, such as portals, face massive numbers of user visits and highly concurrent requests. The basic countermeasures focus on a few links in the chain: high-performance servers, high-performance databases, efficient programming languages, and high-performance web containers. But these measures alone cannot fundamentally solve the high load and high concurrency that large websites face.

The ideas above also imply, to some extent, greater investment, and they have bottlenecks and limited scalability. Below I will share some of my experience from the perspective of low cost, high performance and high scalability.

1. HTML staticization

As everyone knows, the most efficient, least resource-consuming pages are pure static HTML pages, so we should try to use static pages wherever possible on our websites. This simplest method is in fact also the most effective. For sites with a lot of content that is updated frequently, however, we cannot produce every page by hand, which is why the familiar content management system (CMS) appeared. The news channels of the portals we visit, and even most of their other channels, are managed and published through such a system. A CMS supports simple information entry and automatically generates static pages, and it can also offer channel management, permission management, automatic crawling and other features. For a large website, an efficient, manageable CMS is essential. Beyond portals and information-publishing sites, staticizing as much as possible is also a necessary performance measure for highly interactive community sites: posts and articles can be staticized in real time and re-staticized when they are updated, a widely used strategy. Mop's hodgepodge board uses it, and so does the NetEase community.



At the same time, HTML staticization is also used by some caching strategies. For parts of the system that query the database frequently but whose content changes rarely, consider HTML staticization, for example a forum's public settings. Mainstream forums let these settings be managed in the admin backend and stored in the database, yet the front end reads them in large volumes while they change only rarely. Such content can be staticized whenever the backend updates it, avoiding a large number of database requests.
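
To make the idea concrete, here is a minimal Java sketch of that approach: whenever the rarely changing settings are saved in the backend, a static HTML fragment is rewritten so the front end never queries the database for them. The output path, class name and field names are illustrative assumptions, not part of any particular forum.

import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical helper: regenerate the static settings fragment whenever the admin backend saves changes.
public class SettingsStaticizer {
    public static void regenerate(String siteName, String announcement) throws IOException {
        String html = "<div class=\"site-settings\">"
                + "<h1>" + siteName + "</h1>"
                + "<p>" + announcement + "</p>"
                + "</div>";
        // Overwrite the fragment that page templates include; no database access is needed at read time.
        try (Writer w = Files.newBufferedWriter(Paths.get("/var/www/static/settings.html"),
                StandardCharsets.UTF_8)) {
            w.write(html);
        }
    }
}

The backend calls regenerate(...) once after each settings update, so every read is served entirely from the static file.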


2. Image server separation


As we all know, for web servers, whether Apache, IIS or another container, images are what consume the most resources, so we need to separate images from pages. This is a strategy basically every large website adopts: they all have independent image servers, often many of them. Such an architecture reduces the pressure on the servers that handle page requests and ensures the system will not crash because of image problems. The application servers and image servers can then be tuned differently; for example, Apache on the image servers can be configured to support as few ContentTypes and load as few modules as possible, giving lower system overhead and higher execution efficiency.


3. Database cluster and database table hashing


Large websites have complex applications, and these applications must use databases. Faced with massive access volumes, the database bottleneck soon appears, and a single database quickly becomes unable to serve the application, so we need database clusters or database/table hashing.


For database clustering, many databases have their own solutions; Oracle, Sybase and others offer good ones, and the Master/Slave replication commonly provided by MySQL is a similar approach. Whatever DB you use, refer to its corresponding solution.

The database clusters mentioned above are constrained by the DB being used in terms of architecture, cost and scalability, so we also need to improve the system architecture from the application side, where database and table hashing is the most common and most effective approach. We split the database along business, application or functional module lines in the application layer, with different modules mapped to different databases or tables, and then hash a given page or feature into smaller databases or tables according to some strategy, for example hashing the user table by user ID. This improves system performance at low cost and scales well. Sohu's forum adopts such a structure: forum users, settings, posts and other information are separated into databases, and posts and users are then hashed into databases and tables by section and ID. In the end it can all be configured in a simple configuration file, allowing a low-cost database to be added at any time to supplement system performance.
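
As a rough illustration of hashing the user table by user ID, here is a minimal Java sketch; the number of databases and tables, and the naming scheme, are made-up assumptions rather than Sohu's actual layout.

// Hypothetical router that maps a user ID to a physical database and table.
public final class UserShardRouter {
    private static final int DB_COUNT = 4;        // assumed number of user databases
    private static final int TABLES_PER_DB = 16;  // assumed number of user tables per database

    /** Returns a location such as "user_db_0.user_7" for the given user ID. */
    public static String route(long userId) {
        int bucket = (int) (userId % (DB_COUNT * TABLES_PER_DB));
        return "user_db_" + (bucket / TABLES_PER_DB) + ".user_" + (bucket % TABLES_PER_DB);
    }

    public static void main(String[] args) {
        System.out.println(route(1234567L));
    }
}

Because the location is derived purely from the ID, every application server computes the same answer without shared state; note, however, that changing DB_COUNT remaps existing users, so growing the cluster requires consistent hashing or a migration plan.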

4. Caching

Anyone who works in technology has come across the word cache, and caches are used in many places. Caching is also very important in website architecture and website development. Here we first cover the two most basic kinds of cache; advanced and distributed caching are described later.

For caching at the architecture level, anyone familiar with Apache knows that it provides its own cache module, and you can also put Squid in front for caching. Both approaches can effectively improve Apache's response to requests.

For caching in website program development, the memory cache available on Linux is a commonly used caching interface and can be used in web development; for example, in Java development you can use a memory cache to cache and share data, an architecture some large communities use. Beyond that, virtually every web language has its own cache modules and approaches: PHP has Pear's Cache module, Java has even more, and while I am not very familiar with .NET, I am sure it has them too.
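
As a small example of in-process caching in Java, here is a minimal sketch of a TTL-based cache built on the standard library; it is a generic illustration, not the API of any particular framework.

import java.util.concurrent.ConcurrentHashMap;

// A minimal in-process cache with a per-entry time-to-live.
public class SimpleCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<K, Entry<V>>();

    /** Stores a value that will be considered stale after ttlMillis milliseconds. */
    public void put(K key, V value, long ttlMillis) {
        map.put(key, new Entry<V>(value, System.currentTimeMillis() + ttlMillis));
    }

    /** Returns the cached value, or null if it is absent or expired. */
    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) {
            map.remove(key);
            return null;
        }
        return e.value;
    }
}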

5. Mirroring

Mirroring is a technique large websites often use to improve performance and data safety. Mirroring can smooth out the differences in access speed caused by different network providers and regions, such as the gap between ChinaNet and EduNet, which has prompted many sites to build mirror sites inside the education network with data updated periodically or in real time. I will not go into the details of mirroring technology here; there are many professional off-the-shelf solution architectures and products to choose from, as well as cheap software approaches such as rsync and other tools on Linux.

6. Load balancing

Load balancing will be the ultimate means by which large websites handle high-load access and large numbers of concurrent requests.

Load balancing technology has been developed for many years, and there are many professional service providers and products to choose from. I have personally come across some solutions, and two of them can be used as a reference.

1) Hardware Layer 4 Switching

Layer 4 switching uses the header information of layer-3 and layer-4 packets to identify business flows by application interval, and it distributes the traffic of an entire interval segment to the appropriate application server for processing. The layer-4 switching function acts like a virtual IP pointing at physical servers. The services it forwards follow a variety of protocols, including HTTP, FTP, NFS, Telnet and others, and these services require complex load-balancing algorithms over the physical servers. In the IP world the service type is determined by the TCP or UDP port of the endpoint, and in layer-4 switching the application range is determined by the source and destination IP addresses and the TCP and UDP ports.

Among hardware layer-4 switching products there are some well-known choices, such as Alteon and F5. They are very expensive, but worth the money, offering excellent performance and very flexible management. Yahoo China used three or four Alteons to handle nearly 2,000 servers.

2) Software four-layer switching

Once you understand how a hardware layer-4 switch works, software layer-4 switching based on the OSI model follows naturally. The solution implements the same principle with somewhat lower performance, but it still handles a fair amount of pressure without difficulty. Some say the software approach is actually more flexible, its processing power depending entirely on how familiar you are with the configuration.
On Linux we can use the widely used LVS for software layer-4 switching. LVS stands for Linux Virtual Server. It provides a real-time disaster-response solution based on heartbeat lines, which improves the robustness of the system, and offers flexible virtual VIP configuration and management that can serve multiple application needs at once, which is essential for distributed systems.

A typical load-balancing strategy is to build a Squid cluster on top of software or hardware layer-4 switching, an approach adopted by many large websites, including search engines. Such an architecture is low cost, high performance and highly scalable, and adding or removing nodes is very easy. I plan to write up this structure in detail and discuss it with everyone.

1: The database in high-concurrency, high-load websites

Yes, the database comes first; it is the first SPOF (single point of failure) most applications face. Especially for Web 2.0 applications, database responsiveness must be solved first.

Generally speaking, MySQL is the most commonly used. You may start with a single MySQL host; once the data grows past roughly one million rows, MySQL's performance drops sharply. A common optimization is M-S (master-slave) synchronous replication, running queries and writes on different servers. What I recommend is an M-M-Slaves layout: two master MySQL servers and multiple slaves. Note that although there are two masters, only one is active at any time, and we can switch over at a chosen moment. The reason for using two masters is to prevent the master from becoming the system's SPOF again.

The slaves can be further load balanced, for example combined with LVS, to spread SELECT operations appropriately across different slaves.
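
To illustrate read/write splitting over such an M-M-Slaves layout at the application level, here is a minimal Java sketch; the DataSource objects for the active master and the slaves are assumed to be configured elsewhere, and with LVS in front of the slaves the random selection below would not be needed.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import java.util.Random;
import javax.sql.DataSource;

// Routes writes to the currently active master and spreads reads across the slaves.
public class ReadWriteRouter {
    private final DataSource activeMaster;
    private final List<DataSource> slaves;
    private final Random random = new Random();

    public ReadWriteRouter(DataSource activeMaster, List<DataSource> slaves) {
        this.activeMaster = activeMaster;
        this.slaves = slaves;
    }

    /** Connection for INSERT/UPDATE/DELETE statements. */
    public Connection writeConnection() throws SQLException {
        return activeMaster.getConnection();
    }

    /** Connection for SELECT statements; falls back to the master if no slave is available. */
    public Connection readConnection() throws SQLException {
        if (slaves.isEmpty()) {
            return activeMaster.getConnection();
        }
        return slaves.get(random.nextInt(slaves.size())).getConnection();
    }
}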

The above architecture can cope with a certain amount of load, but as the user base keeps growing and your user table passes ten million rows, that master becomes the SPOF. You cannot expand the slaves arbitrarily either, or the cost of replication will skyrocket. What then? My approach is partitioning at the business level. The simplest case is user data: split it by some rule, such as the ID, into different database clusters.

A global database is used for the metadata lookup. The drawback is that every query costs one extra hop: for example, to query a user named nightsailer you must first go to the global database group to find the cluster ID that nightsailer belongs to, and then go to that cluster to fetch nightsailer's actual data.
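
A minimal JDBC sketch of that two-step lookup follows; the table and column names (user_cluster_map, users, email) are hypothetical, and the per-cluster DataSource map is assumed to be configured elsewhere.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Map;
import javax.sql.DataSource;

// Step 1: ask the global database which cluster holds the user; step 2: read the row from that cluster.
public class ShardedUserDao {
    private final DataSource globalDataSource;                  // the global "metadata" database
    private final Map<Integer, DataSource> clusterDataSources;  // one DataSource per MySQL cluster

    public ShardedUserDao(DataSource globalDataSource, Map<Integer, DataSource> clusterDataSources) {
        this.globalDataSource = globalDataSource;
        this.clusterDataSources = clusterDataSources;
    }

    /** Returns the user's email, or null if the user does not exist. */
    public String findUserEmail(String userName) throws SQLException {
        Integer clusterId = null;
        try (Connection global = globalDataSource.getConnection();
             PreparedStatement ps = global.prepareStatement(
                     "SELECT cluster_id FROM user_cluster_map WHERE user_name = ?")) {
            ps.setString(1, userName);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    clusterId = rs.getInt(1);
                }
            }
        }
        if (clusterId == null) {
            return null;
        }
        try (Connection shard = clusterDataSources.get(clusterId).getConnection();
             PreparedStatement ps = shard.prepareStatement(
                     "SELECT email FROM users WHERE user_name = ?")) {
            ps.setString(1, userName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}

The extra round trip to the global database is the price of flexibility; in practice the user-to-cluster mapping is small and changes rarely, so it is usually cached.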

Each cluster can use the M-M or M-M-Slaves mode. The structure is scalable: as the load grows, you simply add new MySQL clusters.

It should be noted that:

1. Disable all auto_increment fields

2. IDs need to be allocated centrally by a universal algorithm (see the sketch after this list)

3. You need a good way to monitor the load on the MySQL hosts and the health of the service. If you have more than 30 MySQL databases running, you will understand what I mean.

4. Do not use persistent connections (do not use pconnect). Use a third-party database connection pool such as SQLRelay instead, or simply roll your own, because the MySQL connection pooling in PHP 4 often has problems.
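
For point 2 above, here is a minimal sketch of one common way to allocate IDs centrally without auto_increment: each application node reserves a block of IDs from a single central sequence and hands them out locally. The CentralSequence interface is a stand-in for whatever central table or service actually issues the blocks.

// Hands out IDs from blocks reserved at a central sequence, so no table needs auto_increment.
public class BlockIdAllocator {

    /** Stand-in for the central allocator (for example a dedicated table or service). */
    public interface CentralSequence {
        /** Reserves blockSize consecutive IDs and returns the first one. */
        long reserveBlock(int blockSize);
    }

    private final CentralSequence central;
    private final int blockSize;
    private long next;      // next ID to hand out
    private long blockEnd;  // first ID beyond the current block

    public BlockIdAllocator(CentralSequence central, int blockSize) {
        this.central = central;
        this.blockSize = blockSize;
    }

    public synchronized long nextId() {
        if (next >= blockEnd) {                       // current block exhausted: reserve a new one
            next = central.reserveBlock(blockSize);
            blockEnd = next + blockSize;
        }
        return next++;
    }
}

Reserving in blocks keeps the central store from becoming a hotspot, at the cost of gaps in the sequence when a node restarts.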

2: HTML staticization in high-concurrency, high-load websites

As section 1 above explained, pure static HTML pages are the most efficient and cheapest to serve, so we staticize wherever possible: a CMS generates static pages for content sites, community sites staticize posts in real time and re-staticize them on update, and frequently read but rarely changing data, such as a forum's public settings, can be staticized whenever the backend updates it, avoiding large numbers of database requests. The concrete solution below shows how to do this with a servlet and a JSP.

Website HTML static solution

Normally, when a request for a servlet resource reaches the web server, we fill in the specified JSP page to answer the request:

HTTP request---Web server---Servlet--Business logic processing--Access data--Fill JSP--Response request

After HTML staticization:

HTTP request---Web server---Servlet--HTML--Response request

Static access request is as follows

Servlet:


public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    if (request.getParameter("chapterId") != null) {
        String chapterFileName = "bookChapterRead_" + request.getParameter("chapterId") + ".html";
        String chapterFilePath = getServletContext().getRealPath("/") + chapterFileName;
        File chapterFile = new File(chapterFilePath);
        // if the static file already exists, just tell the browser to go there
        if (chapterFile.exists()) { response.sendRedirect(chapterFileName); return; }
        INovelChapterBiz novelChapterBiz = new NovelChapterBizImpl();
        // chapter information
        NovelChapter novelChapter = novelChapterBiz.searchNovelChapterById(
                Integer.parseInt(request.getParameter("chapterId")));
        int lastPageId = novelChapterBiz.searchLastCHapterId(novelChapter.getNovelId().getId(), novelChapter.getId());
        int nextPageId = novelChapterBiz.searchNextChapterId(novelChapter.getNovelId().getId(), novelChapter.getId());
        request.setAttribute("novelChapter", novelChapter);
        request.setAttribute("lastPageId", lastPageId);
        request.setAttribute("nextPageId", nextPageId);
        // render /bookRead.jsp, save the result as a static HTML file, then redirect to it
        new CreateStaticHTMLPage().createStaticHTMLPage(request, response, getServletContext(),
                chapterFileName, chapterFilePath, "/bookRead.jsp");
    }
}

The class that generates the static HTML page:


package com.jb.y2t034.thefifth.web.servlet;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;
/**
 * Creates static HTML pages.
 * Function: render a JSP and save the result as a static HTML page.
 * Date: 2009-10-11
 * Location: home
 * @author mavk
 */
public class CreateStaticHTMLPage {
    /**
     * Generates the static HTML page.
     * @param request the request object
     * @param response the response object
     * @param servletContext the servlet context
     * @param fileName the name of the file to generate
     * @param fileFullPath the full path of the file to generate
     * @param jspPath the path of the JSP to render (a relative path is fine)
     * @throws IOException
     * @throws ServletException
     */
    public void createStaticHTMLPage(HttpServletRequest request, HttpServletResponse response, ServletContext servletContext, String fileName, String fileFullPath, String jspPath) throws ServletException, IOException {
        response.setContentType("text/html;charset=gb2312"); // encoding of the HTML output stream (i.e. of the HTML file)
        RequestDispatcher rd = servletContext.getRequestDispatcher(jspPath); // obtain the JSP resource
        final ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(); // receives what is written to the ServletOutputStream
        final ServletOutputStream servletOuputStream = new ServletOutputStream() { // captures output written through the HttpServletResponse
            public void write(byte[] b, int off, int len) {
                byteArrayOutputStream.write(b, off, len);
            }
            public void write(int b) {
                byteArrayOutputStream.write(b);
            }
        };
        final PrintWriter printWriter = new PrintWriter(new OutputStreamWriter(byteArrayOutputStream)); // wrap the byte stream as a character stream
        HttpServletResponse httpServletResponse = new HttpServletResponseWrapper(response) { // wrapper that redirects the response output (overrides two methods)
            public ServletOutputStream getOutputStream() {
                return servletOuputStream;
            }
            public PrintWriter getWriter() {
                return printWriter;
            }
        };
        rd.include(request, httpServletResponse); // render the JSP into the wrapped response
        printWriter.flush(); // flush the buffer so all data is written out
        FileOutputStream fileOutputStream = new FileOutputStream(fileFullPath);
        byteArrayOutputStream.writeTo(fileOutputStream); // write everything captured in byteArrayOutputStream to the file
        fileOutputStream.close(); // close the output stream and release resources
        response.sendRedirect(fileName); // redirect the client to the newly generated file
    }
}

3: Caching, load balancing, and storage in high-concurrency, high-load websites

Caching is another big issue. I usually use memcached as the cache cluster; generally speaking, deploying around 10 machines (roughly a 10 GB memory pool) is enough. One thing to note: never let memcached swap, and it is best to turn off swap on Linux altogether.
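
As a rough illustration of talking to such a memcached pool from Java, here is a sketch assuming the spymemcached client library; the host names, key and value are purely illustrative.

import java.io.IOException;
import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

public class MemcachedExample {
    public static void main(String[] args) throws IOException {
        // One client talks to the whole pool; keys are distributed across the servers.
        MemcachedClient client = new MemcachedClient(
                AddrUtil.getAddresses("cache1.example.com:11211 cache2.example.com:11211"));

        client.set("forum:settings", 3600, "{\"title\":\"My Forum\"}"); // cache for one hour
        Object cached = client.get("forum:settings");                   // served from memory if present
        System.out.println(cached);

        client.shutdown();
    }
}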

Load Balancing/Acceleration

When caching came up above, some people's first thought may have been page staticization, so-called static HTML; I consider that common sense rather than a key point here. What page staticization brings with it is the need for load balancing and acceleration of the static service. I think Lighttpd + Squid is the best approach:

LVS <===> lighttpd ====> squid(s) ==== lighttpd

That is what I usually use. Note that I do not use Apache; unless there is a specific requirement I do not deploy it, because I generally run php-fastcgi with lighttpd, which performs much better than apache + mod_php.

Using Squid also takes care of things like file synchronization, but note that you must monitor the cache hit rate closely and push it above 90% if you can. Squid and lighttpd raise many more topics worth discussing, which I will not go into here.

Storage

Storage is also a big problem. One case is storing small files, such as images; the other is storing large files, such as search-engine indexes, where a single file commonly exceeds 2 GB.

The simplest way to store small files is to distribute them with lighttpd. Or simply use Red Hat's GFS, whose advantage is that it is transparent to the application and whose drawback is the cost, by which I mean buying the disk arrays. In my project the storage volume is 2-10 TB and I use distributed storage, which means solving file replication and redundancy so that each file has its own redundancy; for this you can refer to Google's GFS paper.

For large-file storage you can refer to Nutch's solution, which has since become the independent Hadoop sub-project (you can google it).

Others: things like passport (unified login) also need to be considered, but those are comparatively simple.

4: Image server separation in high-concurrency, high-load websites

As section 2 above explained, images are what consume the most resources on a web server, whether Apache, IIS or another container, so basically every large website moves them onto independent image servers that can be tuned separately and keep image traffic from dragging down or crashing the page servers.

Implementing image server separation with Apache

Background:

An application in its startup phase is likely deployed on a single server (for cost reasons). The first thing to split out is, of course, the database from the application server. What comes second? Everyone has their own considerations; my project team focused on saving bandwidth, because no matter how good the server or how high the bandwidth, it can easily buckle once concurrency arrives. That is why this part focuses on it. The emphasis here is on practice; it will not fit every situation and is offered for reference. The environment:

Web application server: 4 dual-core 2 GHz CPUs, 4 GB RAM

Deployment: Win2003 / Apache HTTP Server 2.1 / Tomcat 6

Database server: 4 dual-core 2 GHz CPUs, 4 GB RAM

Deployment: Win2003 / MS SQL Server 2000

Steps:

Step 1: Add two ordinary servers (2 dual-core 2 GHz CPUs, 2 GB RAM each) to act as resource servers.

Deployment: Tomcat 6 running a simple image-upload application (remember to specify it in web.xml), with the domain names res1.***.com and res2.***.com, using the AJP protocol.

Step 2: Modify the Apache httpd.conf configuration.

The original application's file upload URLs were:

1. /fileupload.html

2. /otherupload.html

Add the following configuration to httpd.conf:


<VirtualHost *:80>
    ServerAdmin webmaster@***.com
    ProxyPass /fileupload.html balancer://rescluster/fileupload lbmethod=byrequests stickysession=JSESSIONID nofailover=Off timeout=5 maxattempts=3
    ProxyPass /otherupload.html balancer://rescluster/otherupload.html lbmethod=byrequests stickysession=JSESSIONID nofailover=Off timeout=5 maxattempts=3
    <Proxy balancer://rescluster>
        BalancerMember ajp://res1.***.com:8009 smax=5 max=500 ttl=120 retry=300 loadfactor=100 route=tomcat1
        BalancerMember ajp://res2.***.com:8009 smax=5 max=500 ttl=120 retry=300 loadfactor=100 route=tomcat2
    </Proxy>
</VirtualHost>

Step 3: Modify the business logic.

All uploaded files are saved in the database as full URLs; for example, a product image path is stored as: http://res1.***.com/upload/20090101/product120302005.jpg
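
A minimal Java sketch of that convention follows: when an upload is saved, a resource host is chosen and the full URL is stored in the database. The host list and path layout mirror the example above but are otherwise assumptions.

// Builds the full URL under which an uploaded file is stored and later served.
public class UploadUrlBuilder {
    private static final String[] RES_HOSTS = { "http://res1.***.com", "http://res2.***.com" };

    public static String buildUrl(String datePath, String fileName) {
        // Hash the file name over the resource servers so uploads spread evenly.
        int idx = (fileName.hashCode() & 0x7fffffff) % RES_HOSTS.length;
        return RES_HOSTS[idx] + "/upload/" + datePath + "/" + fileName;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("20090101", "product120302005.jpg"));
    }
}

Since each record stores a complete URL, existing data keeps working unchanged when new resource servers are added; only new uploads need to know about them.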

Now you can rest easy: when bandwidth runs short, you can add a few dozen more image servers with only a slight change to the Apache configuration file.

5: Database clusters and database/table hashing in high-concurrency, high-load websites

Section 3 above already covered the core idea: when a single database can no longer keep up with the volume of access, use database clusters (Oracle, Sybase and MySQL's Master/Slave all offer solutions) or hash databases and tables at the application level, as in the Sohu forum example. What follows takes a closer look at cluster software itself.

Classification of cluster software:

Generally speaking, cluster software falls into three major categories according to its focus and the problems it tries to solve: high performance clusters (HPC), load balancing clusters (LBC), and high availability clusters (HAC).

A high performance cluster (HPC) uses multiple machines in the cluster to work on the same task, achieving speed and reliability far beyond what a single machine can deliver and making up for the performance limits of standalone machines. This kind of cluster is widely used in environments with large data volumes and complex computation, such as weather forecasting and environmental monitoring.

A load balancing cluster (LBC) uses multiple machines in the cluster to complete many small parallel tasks. Generally, the more people use an application, the longer the response time to user requests becomes and the more machine performance suffers. With a load balancing cluster, any machine in the cluster can respond to a user's request; when a request comes in, the cluster picks the machine with the lightest load and best capacity to accept and answer it. This increases the availability and stability of the system, and this type of cluster is commonly used for websites.

A high availability cluster (HAC) exploits the redundancy in the cluster: when a machine in the system fails, a backup machine quickly takes over its services while the faulty machine is repaired and returned, maximizing the availability of the cluster's services. Such systems are widely used in fields with high reliability requirements, such as banking and telecom services.

Current status of database clusters

A database cluster applies computer clustering technology to the database. However perfect each vendor claims its architecture to be, it cannot change the fact that Oracle leads and everyone else chases. In cluster solutions, Oracle RAC is still ahead of other database vendors including Microsoft: it meets customers' needs for high availability, high performance, database load balancing, and easy expansion.

Oracle's Real Application Cluster (RAC)

Microsoft SQL Cluster Server (MSCS)

IBM's DB2 UDB High Availability Cluster (UDB)

Sybase ASE High Availability Cluster (ASE)

MySQL High Availability Cluster (MySQL CS)

Third-party HA (high availability) clusters based on I/O

The main database cluster technologies today fall into the six categories above. Some are developed by the database vendors themselves, some by third-party cluster companies, and some by database vendors in cooperation with third-party cluster companies, and the functions and architectures the various clusters implement also differ.

RAC (Real Application Cluster) is a technology introduced in the Oracle9i database and the core technology with which Oracle supports grid computing environments. Its appearance resolved an important contradiction facing traditional database applications: the tension between high performance, high scalability and low price. For a long time Oracle has dominated the clustered database market with its Real Application Cluster (RAC) technology.

6: Caching in high-concurrency, high-load websites

Section 4 above already introduced the two most basic kinds of cache: caching at the architecture level, with Apache's own cache module or a Squid front end, and caching in program development, where each web language has its own cache modules and methods. What follows is a survey of open-source caching frameworks on the Java side.

Java Open Source Cache Frameworks

JBossCache/TreeCache. JBossCache is a replicated, transactional cache that lets you cache enterprise application data to improve performance. Cached data is replicated automatically, letting you easily run it in a cluster of JBoss servers. JBossCache can run as an MBean service inside the JBoss application server or another J2EE container, and of course it can also run standalone. JBossCache includes two modules, TreeCache and TreeCacheAOP. TreeCache is a tree-structured, replicated, transactional cache. TreeCacheAOP is an "object-oriented" cache that uses AOP to manage POJOs dynamically.

OSCache. The OSCache tag library, designed by OpenSymphony, is a pioneering JSP custom-tag application providing fast in-memory caching inside existing JSP pages. OSCache is a widely adopted, high-performance J2EE caching framework that can serve as a general caching solution for any Java application. Its features: cache any object, whether parts of JSP pages, HTTP requests, or arbitrary Java objects, without restriction; a comprehensive API giving you full programmatic control of all OSCache features; persistent caching, so the cache can be written to disk at will, allowing expensive-to-create data to stay cached even across application restarts; cluster support, with clustered cache data configurable without code changes; and cache entry expiration, giving you maximum control over how cached objects expire, including pluggable refresh strategies when the defaults are not enough.

JCACHE. JCACHE is an upcoming standard specification (JSR 107) describing a way to cache Java objects temporarily in memory, covering object creation, shared access, spooling, invalidation, and consistency across JVMs. It can be used to cache the data read most often inside JSPs, such as product catalogs and price lists. With JCACHE, response times for most queries are accelerated by serving cached data (internal testing showed response times roughly 15 times faster).
Ehcache. Ehcache comes from Hibernate and is used in Hibernate as a data caching solution.

Java Caching System. JCS is a sub-project of Jakarta's Turbine project. It is a composite caching tool that can cache objects to memory or to disk, supports expiration times for cached objects, and can also be built into a distributed caching architecture for high-performance applications. Objects that are accessed frequently and are costly to obtain each time can be kept temporarily in the cache to improve service performance, and JCS is a good tool for that; for applications where reads far outnumber writes, caching can improve performance dramatically.

SwarmCache SwarmCache is a simple yet powerful distributed caching mechanism. It uses IP multicast to efficiently communicate between cached instances. It is ideal for quickly improving the performance of clustered web applications.

ShiftOne. ShiftOne Object Cache is a Java library providing basic object caching. The strategies it implements are first-in-first-out (FIFO), least recently used (LRU), and least frequently used (LFU). Every strategy enforces a cap on the number of elements and on how long an element may live.

WhirlyCache Whirlycache is a fast, configurable cache of objects that exist in memory. It can speed up a website or application by caching objects that would otherwise have to be built by querying a database or other costly processes.

Jofti Jofti can index and search objects in the cache layer (supports EHCache, JBossCache and OSCache) or in storage structures that support the Map interface. The framework also provides transparency for the addition, deletion, and modification of objects in the index as well as easy-to-use query capabilities for search.
cache4j. cache4j is a Java object cache with a simple API and a fast implementation. Its features: in-memory caching, designed for multithreaded environments, two implementations (synchronized and blocking), several eviction strategies (LFU, LRU, FIFO), and the ability to store objects by strong or soft reference.

Open Terracotta. A JVM-level open-source clustering framework that provides HTTP session replication, distributed caching, POJO clustering, and coordination of distributed applications across the JVMs in a cluster (using code injection, so you do not need to modify anything).

sccache The object caching system used by SHOP.COM. sccache is an in-process cache and a second-level, shared cache. It stores cached objects on disk. Supports associated keys, keys of any size and data of any size. Ability to automatically perform garbage collection.

Shoal Shoal is a Java-based scalable dynamic cluster framework that provides infrastructure support for building fault-tolerant, reliable and available Java applications. This framework can also be integrated into any Java product that does not wish to be tied to a specific communication protocol, but requires cluster and distributed systems support. Shoal is the clustering engine for GlassFish and JonAS application servers.

Simple-spring-Memcached Simple-Spring-Memcached encapsulates calls to MemCached, making MemCached client development extremely simple.
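
As a concrete taste of one of the frameworks above, here is a rough Ehcache sketch using its classic 2.x-style programmatic API; the cache name, sizes and expiry times are illustrative assumptions.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.getInstance();

        // name, max elements in memory, overflow to disk, eternal, TTL seconds, TTI seconds
        Cache cache = new Cache("hotArticles", 10000, false, false, 600, 300);
        manager.addCache(cache);

        cache.put(new Element("article:42", "<html>...rendered article...</html>"));

        Element hit = cache.get("article:42");
        if (hit != null) {
            System.out.println(hit.getObjectValue()); // served from memory, no database access
        }
        manager.shutdown();
    }
}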

Summary

The above is the detailed discussion of solving high concurrency problems in Java systems.
