
Examples of high-concurrency solutions and high-load optimization in Java

黄舟 (Original)
2017-07-27 10:51

A small website, such as a personal site, can be built with the simplest static HTML pages, with a few images added for decoration and all pages stored in a single directory. Such a site places very simple demands on system architecture and performance. As Internet businesses have grown richer, website technology has, over years of development, been divided into very specialized areas. Large websites in particular draw on a very wide range of technologies, from hardware to software, programming languages, databases, web servers, firewalls and more, with demands far beyond those of the original simple static HTML site.

Large websites, such as portals, face massive user traffic and highly concurrent requests. The basic responses focus on a few links in the chain: high-performance servers, high-performance databases, efficient programming languages, and high-performance web containers. But these measures alone do not fundamentally solve the high-load, high-concurrency problems that large websites face.

The approaches above also imply greater investment to some extent, they have their own bottlenecks, and they do not scale well. Below I will share some of my experience from the perspective of low cost, high performance, and high scalability.

1. HTML static

As everyone knows, a purely static HTML page is the most efficient to serve and consumes the least resources, so we should implement the pages of our website as static pages wherever possible. This simplest method is in fact the most effective one. However, for websites with a large amount of frequently updated content, we cannot produce every page by hand, which is where the familiar content management system (CMS) comes in. The news channels of the portal sites we often visit, and even many of their other channels, are managed and published through such information release systems. A CMS can at minimum handle content entry and automatically generate static pages, and can also provide channel management, permission management, automatic crawling, and other functions. For a large website, an efficient, manageable CMS is essential.

Beyond portals and information-publishing sites, for highly interactive community sites, making pages as static as possible is also a necessary means of improving performance: generating static versions of posts and articles in real time and regenerating them when they are updated is a widely used strategy. Mop's hodgepodge board uses this approach, as does the NetEase community, among others.

At the same time, HTML staticization is also a technique used by some caching strategies. For parts of the system that query the database frequently but whose content rarely changes, consider HTML staticization. Take a forum's public settings information: current mainstream forum software lets administrators manage it and stores it in the database, and a large portion of it is read by the front-end program even though it changes very rarely. Regenerating this content as static files whenever the back end updates it avoids a large number of database requests.

2. Image server separation

As we all know, for web servers, whether Apache, IIS, or another container, images consume the most resources, so it is necessary to separate images from pages. This is a strategy essentially all large websites adopt: they have dedicated image servers, often many of them. Such an architecture reduces the pressure on the servers that handle page requests and ensures the system will not crash because of image traffic. The application servers and image servers can also be tuned differently; for example, Apache on the image servers can be configured to support as few ContentTypes as possible and load as few modules as possible, which keeps system overhead low and execution efficiency high.

3. Database cluster and database table hashing

Large websites have complex applications, and these applications all rely on databases. Under heavy traffic, the database quickly becomes the bottleneck, and a single database soon cannot keep up with the application. At that point we need database clustering or database/table hashing.

For database clustering, many databases have their own solutions; Oracle, Sybase and others offer good ones, and the Master/Slave replication commonly used with MySQL is a similar approach. Whatever DB you use, refer to the corresponding solution and implement it.

The database clusters mentioned above are constrained in architecture, cost, and scalability by the type of DB used, so we also need to improve the system architecture from the application side; database and table hashing is the most commonly used and most effective solution. We separate databases within the application by business, application, or functional module, so that different modules map to different databases or tables, and then apply a further hashing strategy to a particular page or function, for example hashing the user table by user ID. This improves system performance at low cost and scales well. Sohu's forum adopts such a structure: the forum's users, settings, posts, and other information are split into separate databases, and posts and users are then hashed into databases and tables by board and ID. In the end, a simple change in a configuration file lets the system bring in another low-cost database at any time to add capacity.
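As a rough illustration of the ID-based hashing described above, here is a minimal sketch using only standard JDK types. The shard counts, the "user_NN" table naming, and the DataSource wiring are my own assumptions for illustration, not details from the original article.

import javax.sql.DataSource;
import java.util.List;

public class UserShardRouter {
    private static final int DB_COUNT = 4;      // assumed number of user databases
    private static final int TABLE_COUNT = 16;  // assumed number of user tables per database

    private final List<DataSource> userDataSources; // one DataSource per user database

    public UserShardRouter(List<DataSource> userDataSources) {
        this.userDataSources = userDataSources;
    }

    /** Pick the database that holds this user's rows. */
    public DataSource dataSourceFor(long userId) {
        int dbIndex = (int) (userId % DB_COUNT);
        return userDataSources.get(dbIndex);
    }

    /** Pick the physical table name for this user, e.g. "user_07". */
    public String tableFor(long userId) {
        int tableIndex = (int) ((userId / DB_COUNT) % TABLE_COUNT);
        return String.format("user_%02d", tableIndex);
    }
}

Adding capacity then means adding databases or tables and adjusting the counts (or, better, a consistent-hashing scheme) in configuration, which is the low-cost scalability the paragraph above describes.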

4. Caching

Anyone working in technology has come across the word cache, and caches are used in many places. Caching is also very important in website architecture and website development. Here I first cover the two most basic kinds of cache; advanced and distributed caching are described later.
For architecture-level caching, anyone familiar with Apache knows that Apache ships its own caching module, and you can also put Squid in front of it for caching. Both approaches can effectively improve Apache's response to requests.
For caching in website program development, the memory cache available on Linux is a commonly used caching interface that can be used in web development. For example, in Java development you can call a memory cache to cache and share data, and some large communities use this architecture. In addition, each web development language basically has its own cache modules and methods: PHP has PEAR's Cache module, Java has even more options; I am not very familiar with .NET, but I am sure it has them too.
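As a small illustration of the kind of in-process cache module mentioned above, here is a minimal sketch built only on standard JDK classes; the TTL handling, lack of an eviction thread, and generic types are my own simplifications, not a description of any specific library.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** A tiny TTL cache on top of ConcurrentHashMap; expired entries are simply dropped on read. */
public class SimpleCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final ConcurrentMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public SimpleCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) { // entry too old: treat as a miss
            map.remove(key);
            return null;
        }
        return e.value;
    }
}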

5. Mirroring

Mirroring is a method large websites often use to improve both performance and data safety. Mirroring can smooth out the differences in access speed caused by different network providers and regions, for example the gap between ChinaNet and EduNet, which has prompted many websites to build mirror sites inside the education network, with data updated on a schedule or in real time. I will not go into the detailed technology of mirroring here; there are many mature professional solutions and products to choose from, as well as cheap software approaches, such as rsync and similar tools on Linux.

6. Load balancing

Load balancing will be the ultimate solution for large websites to solve high-load access and a large number of concurrent requests.

Load balancing technology has been developed for many years, and there are many professional service providers and products to choose from. I have personally come across some solutions, and two of them can be used as a reference.

1) Hardware four-layer switching

Four-layer (Layer 4) switching uses header information from layer 3 and layer 4 packets to identify traffic flows by application and distributes the traffic of an entire segment to the appropriate application server. Layer 4 switching presents a virtual IP that points at physical servers; the services it carries can use many protocols, including HTTP, FTP, NFS, and Telnet, and they require complex load-balancing algorithms over the physical servers. In the IP world, the service type is determined by the TCP or UDP port of the endpoint, and in Layer 4 switching the application range is determined by the source and destination IP addresses together with the TCP and UDP ports.

In the hardware Layer 4 switching market there are some well-known products to choose from, such as Alteon and F5. They are expensive but worth the money, providing excellent performance and very flexible management. Yahoo China used three or four Alteons to handle close to 2,000 servers.

2) Software four-layer switching

Once the principle of the hardware Layer 4 switch is understood, software Layer 4 switching based on the OSI model follows naturally. The principle is the same, but performance is somewhat lower; it can still handle a fair amount of load, and some argue the software approach is actually more flexible, with its processing power depending entirely on how well you know your configuration.

On Linux we can use the popular LVS (Linux Virtual Server) for software Layer 4 switching. It offers a real-time failover solution based on a heartbeat link, improving the robustness of the system, and provides flexible virtual IP (VIP) configuration and management, so a single setup can serve multiple applications, which is essential for distributed systems.

A typical load-balancing strategy is to build a Squid cluster on top of software or hardware Layer 4 switching; many large websites, including search engines, adopt this idea. The architecture is low-cost, high-performance, and highly scalable, and adding or removing nodes is very easy. I intend to write up this structure in detail and discuss it further.

Java design approaches for handling databases in high-concurrency, high-load websites

1: The database in high-concurrency, high-load websites

Yes, the first thing is the database, the first single point of failure (SPOF) most applications face. For Web 2.0 applications in particular, database responsiveness has to be solved first.
Generally speaking, MySQL is the most commonly used. You may start with a single MySQL host; once the data grows past roughly a million rows, MySQL's performance drops sharply. A common optimization is M-S (master-slave) replication, with queries and writes going to different servers. What I recommend is an M-M-Slaves layout: two master MySQL servers and multiple slaves. Note that although there are two masters, only one is active at any time, and we can switch over when needed. The reason for running two masters is to keep the master from becoming the system's SPOF again.
The slaves can be load balanced further, for example in combination with LVS, to spread SELECT operations appropriately across the different slaves.
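A rough sketch of the M-M-Slaves idea in application code: writes go to whichever master is currently active, while SELECTs are spread round-robin across the slaves. The DataSource wiring and the failover trigger are assumptions for illustration; in practice LVS or a proxy would usually do the read balancing instead.

import javax.sql.DataSource;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ReadWriteRouter {
    private volatile DataSource activeMaster;   // only one of the two masters is active at a time
    private final List<DataSource> slaves;      // replicas used for SELECTs
    private final AtomicInteger counter = new AtomicInteger();

    public ReadWriteRouter(DataSource activeMaster, List<DataSource> slaves) {
        this.activeMaster = activeMaster;
        this.slaves = slaves;
    }

    public DataSource forWrite() {
        return activeMaster;                    // all INSERT/UPDATE/DELETE go to the active master
    }

    public DataSource forRead() {
        int i = Math.floorMod(counter.getAndIncrement(), slaves.size());
        return slaves.get(i);                   // round-robin SELECTs across the slaves
    }

    /** Called by failover logic when switching to the standby master. */
    public void switchMaster(DataSource newActiveMaster) {
        this.activeMaster = newActiveMaster;
    }
}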
The architecture above can handle a certain amount of load, but as users keep growing and your user table passes ten million rows, the master becomes the choke point, and you cannot add slaves indefinitely or the cost of replication synchronization will skyrocket. What then? My approach is table partitioning: partition at the business level. Taking user data as the simplest example, split it by some key, such as ID, into different database clusters.

A global database is used for metadata queries. The drawback is that every query needs an extra hop: for example, to query a user called nightsailer, you first go to the global database group to find the cluster id that nightsailer maps to, and then go to that cluster for nightsailer's actual data.
Each cluster can use M-M or M-M-Slaves mode. This is a scalable structure: as load grows, you simply add new MySQL clusters.
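Here is a hedged sketch of the two-step lookup described above (global database first, then the user's cluster). The table and column names (user_directory, cluster_id) and the cluster registry map are assumptions; the original article does not specify a schema.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Map;
import javax.sql.DataSource;

public class GlobalDirectoryLookup {
    private final DataSource globalDb;                 // the "global" metadata database group
    private final Map<Integer, DataSource> clusters;   // clusterId -> that cluster's DataSource

    public GlobalDirectoryLookup(DataSource globalDb, Map<Integer, DataSource> clusters) {
        this.globalDb = globalDb;
        this.clusters = clusters;
    }

    /** Step 1: ask the global database which cluster holds this user (e.g. "nightsailer"). */
    public int clusterIdFor(String userName) throws SQLException {
        String sql = "SELECT cluster_id FROM user_directory WHERE user_name = ?"; // assumed schema
        try (Connection c = globalDb.getConnection();
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, userName);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) throw new SQLException("unknown user: " + userName);
                return rs.getInt(1);
            }
        }
    }

    /** Step 2: run the real query against the cluster that owns the user. */
    public DataSource clusterFor(String userName) throws SQLException {
        return clusters.get(clusterIdFor(userName));
    }
}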

It should be noted that:
1. Disable all auto_increment fields.
2. Allocate IDs centrally using a common algorithm (a sketch follows this list).
3. Have a solid way to monitor the load and health of the MySQL hosts. If you have more than 30 MySQL databases running, you will understand what I mean.
4. Do not use persistent connections (do not use pconnect). Use a third-party connection pool such as sqlrelay instead, or simply build one yourself, because the MySQL connection pool in PHP 4 often has problems.
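A minimal sketch of centralized ID allocation of the kind point 2 calls for: a central allocator (for example a single-row counter table updated in one statement) hands out blocks of IDs, and each application server consumes its block locally so no table needs auto_increment. The block size and the LongSupplier hook for the central service are assumptions.

import java.util.function.LongSupplier;

public class BlockIdAllocator {
    private static final long BLOCK_SIZE = 1000;    // assumed block size

    private final LongSupplier centralAllocator;    // returns the start of a fresh block from the central service
    private long next = 0;
    private long blockEnd = -1;

    public BlockIdAllocator(LongSupplier centralAllocator) {
        this.centralAllocator = centralAllocator;
    }

    public synchronized long nextId() {
        if (next > blockEnd) {                      // current block exhausted: fetch another
            long blockStart = centralAllocator.getAsLong();
            next = blockStart;
            blockEnd = blockStart + BLOCK_SIZE - 1;
        }
        return next++;
    }
}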

2: HTML static system architecture for high-concurrency and high-load websites

As discussed in the HTML staticization section above, pure static HTML pages are the most efficient and cheapest to serve, so pages should be made static wherever possible, with a CMS generating them automatically and regenerating them whenever the underlying content changes; for rarely changing, frequently queried data this also removes a large number of database requests under high concurrency. What follows is a concrete Java implementation of that idea.

Website HTML static solution
Normally, when a request for a Servlet resource reaches the web server, we render the specified JSP page to respond:

HTTP request --- Web server --- Servlet --- business logic --- data access --- fill JSP --- response

After HTML staticization:

HTTP request --- Web server --- Servlet --- HTML --- response

The handling of a static page request is implemented as follows.

Servlet:


public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    if (request.getParameter("chapterId") != null) {
        String chapterFileName = "bookChapterRead_" + request.getParameter("chapterId") + ".html";
        String chapterFilePath = getServletContext().getRealPath("/") + chapterFileName;
        File chapterFile = new File(chapterFilePath);
        if (chapterFile.exists()) {
            // The static file already exists: just redirect the browser to it.
            response.sendRedirect(chapterFileName);
            return;
        }
        INovelChapterBiz novelChapterBiz = new NovelChapterBizImpl();
        // Load the chapter plus the ids of the previous and next chapters.
        NovelChapter novelChapter = novelChapterBiz.searchNovelChapterById(
                Integer.parseInt(request.getParameter("chapterId")));
        int lastPageId = novelChapterBiz.searchLastCHapterId(novelChapter.getNovelId().getId(), novelChapter.getId());
        int nextPageId = novelChapterBiz.searchNextChapterId(novelChapter.getNovelId().getId(), novelChapter.getId());
        request.setAttribute("novelChapter", novelChapter);
        request.setAttribute("lastPageId", lastPageId);
        request.setAttribute("nextPageId", nextPageId);
        // Render bookRead.jsp to a static HTML file and send the client there.
        new CreateStaticHTMLPage().createStaticHTMLPage(request, response, getServletContext(),
                chapterFileName, chapterFilePath, "/bookRead.jsp");
    }
}

Class to generate HTML static page:


package com.jb.y2t034.thefifth.web.servlet;

import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

/**
 * Creates static HTML pages.
 * Function: render a JSP to a static HTML file
 * Date: October 11, 2009
 * Location: home
 * @author mavk
 */
public class CreateStaticHTMLPage {
    /**
     * Generates a static HTML page.
     * @param request        the request object
     * @param response       the response object
     * @param servletContext the Servlet context
     * @param fileName       the file name
     * @param fileFullPath   the full path of the file
     * @param jspPath        the (relative) path of the JSP to render to a static file
     * @throws IOException
     * @throws ServletException
     */
    public void createStaticHTMLPage(HttpServletRequest request, HttpServletResponse response,
            ServletContext servletContext, String fileName, String fileFullPath, String jspPath)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=gb2312"); // encoding of the generated HTML file
        RequestDispatcher rd = servletContext.getRequestDispatcher(jspPath); // obtain the JSP resource
        // Buffer that receives everything written through the ServletOutputStream.
        final ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        final ServletOutputStream servletOuputStream = new ServletOutputStream() { // captures binary output from the response
            public void write(byte[] b, int off, int len) {
                byteArrayOutputStream.write(b, off, len);
            }
            public void write(int b) {
                byteArrayOutputStream.write(b);
            }
        };
        // Character writer over the same buffer, using the encoding declared above.
        final PrintWriter printWriter = new PrintWriter(new OutputStreamWriter(byteArrayOutputStream, "gb2312"));
        // Response wrapper that redirects the JSP's output into our buffer (overrides two methods).
        HttpServletResponse httpServletResponse = new HttpServletResponseWrapper(response) {
            public ServletOutputStream getOutputStream() {
                return servletOuputStream;
            }
            public PrintWriter getWriter() {
                return printWriter;
            }
        };
        rd.include(request, httpServletResponse); // render the JSP into the wrapper
        printWriter.flush(); // flush buffered characters into the byte buffer
        FileOutputStream fileOutputStream = new FileOutputStream(fileFullPath);
        byteArrayOutputStream.writeTo(fileOutputStream); // write the captured page to the static file
        fileOutputStream.close(); // close the output stream and release resources
        response.sendRedirect(fileName); // redirect the client to the newly generated static file
    }
}

Three: Caching, load balancing, and storage in high-concurrency, high-load websites

Caching is another big topic. I usually use memcached as the cache cluster; generally, deploying around 10 nodes (roughly a 10 GB memory pool) is enough. One thing to note: never let it swap; it is best to turn off Linux swap altogether.
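For reference, here is a minimal sketch of talking to such a memcached pool from Java, assuming the open-source spymemcached client library; the server address, key names, and expiry value are illustrative only.

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class MemcachedExample {
    public static void main(String[] args) throws Exception {
        // Connect to one node of the memcached pool (add more addresses for a larger cluster).
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("10.0.0.1", 11211));

        // Cache a rendered fragment for one hour (the expiry argument is in seconds).
        client.set("forum:settings", 3600, "<cached forum settings html>");

        // Read it back; null means a cache miss, so fall back to the database.
        Object cached = client.get("forum:settings");
        System.out.println(cached != null ? cached : "miss -> query the database");

        client.shutdown();
    }
}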

Load balancing / acceleration

When caching comes up, some people first think of page staticization, the so-called static HTML; I consider that common knowledge rather than a key point here. What page staticization brings with it is load balancing and acceleration for the static content, and I think Lighttpd plus Squid is the best combination:

LVS ==> lighttpd ==> squid(s) ==> lighttpd

That is what I usually use. Note that I do not use Apache; unless there is a specific requirement I do not deploy it, because I generally pair php-fastcgi with lighttpd, which performs much better than apache+mod_php.

Using Squid also helps with problems such as file synchronization, but note that you must monitor the cache hit rate closely and push it above 90% if at all possible.
There is a lot more to discuss about Squid and lighttpd; I will not go into it here.

Storage
Storage is another big problem. One case is storing small files, such as images; the other is storing large files, such as search-engine indexes, where a single file is typically over 2 GB.
The simplest way to store small files is to distribute them with lighttpd, or simply use Red Hat's GFS, whose advantage is transparency to the application and whose drawback is cost; I mean the expense of buying disk arrays. In my project the storage volume is 2-10 TB, and I adopted distributed storage, which means solving file replication and redundancy yourself.
Each file then carries its own level of redundancy; Google's GFS paper is a good reference on this.
For large files, look at Nutch's solution, which has since become the independent Hadoop subproject (you can Google it).

Other concerns:
Things like a passport (single sign-on) service also need to be considered, but those are comparatively simple.
Four: Image server separation in the system architecture of high-concurrency, high-load websites
As noted in the image server separation section above, images are the most resource-hungry content for any web server, whether Apache, IIS, or another container, so large sites move them onto dedicated image servers that can be tuned separately (minimal ContentType support, as few LoadModules as possible), keeping the page-serving system stable and efficient. The rest of this section walks through a practical Apache setup.


Separating the image servers with Apache
Background:
An application in its early stages may well run on a single server (for cost reasons).
The first thing to split off is always the database from the application server.
What to separate next varies. My project team's main concern was saving bandwidth: however good the servers and however wide the pipe, concurrency can still overwhelm them. That is the focus here. The emphasis is on practice; it will not fit every situation, so treat it as a reference.
Environment:
Web application server: 4 dual-core 2 GHz CPUs, 4 GB RAM
  Deployment: Win2003 / Apache HTTP Server 2.1 / Tomcat 6
Database server: 4 dual-core 2 GHz CPUs, 4 GB RAM
  Deployment: Win2003 / MSSQL 2000
Steps:
Step 1: Add two ordinary servers (2 dual-core 2 GHz CPUs, 2 GB RAM) as resource servers.
  Deployment: Tomcat 6 running a simple image-upload application (remember to configure web.xml accordingly), with the domain names res1.***.com and res2.***.com, connected over the AJP protocol.
Step 2: Modify the Apache httpd.conf configuration.
  The application's original upload URLs were:
   1. /fileupload.html
   2. /otherupload.html
  Add the following configuration to httpd.conf:

<VirtualHost *:80>
  ServerAdmin webmaster@***.com
  ProxyPass /fileupload.html balancer://rescluster/fileupload lbmethod=byrequests stickysession=JSESSIONID nofailover=Off timeout=5 maxattempts=3
  ProxyPass /otherupload.html balancer://rescluster/otherupload.html lbmethod=byrequests stickysession=JSESSIONID nofailover=Off timeout=5 maxattempts=3
  # Load balancing across the resource servers
  <Proxy balancer://rescluster/>
    BalancerMember ajp://res1.***.com:8009 smax=5 max=500 ttl=120 retry=300 loadfactor=100 route=tomcat1
    BalancerMember ajp://res2.***.com:8009 smax=5 max=500 ttl=120 retry=300 loadfactor=100 route=tomcat2
  </Proxy>
</VirtualHost>

Step 3: Modify the business logic.
  All uploaded files are stored in the database as full URLs; for example a product image path is saved as http://res1.***.com/upload/20090101/product120302005.jpg
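A small sketch of the "store the full URL" rule from step 3: pick one of the resource hosts and compose the URL that gets persisted. The host list and path layout follow the example above, but this helper class itself is hypothetical.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.atomic.AtomicInteger;

public class ResourceUrlBuilder {
    // Resource (image) servers fronted by the Apache balancer configured above.
    private static final String[] HOSTS = { "http://res1.***.com", "http://res2.***.com" };
    private static final AtomicInteger NEXT = new AtomicInteger();

    /** Build the full URL that is persisted in the database for an uploaded file. */
    public static String buildUrl(String fileName) {
        String host = HOSTS[Math.floorMod(NEXT.getAndIncrement(), HOSTS.length)];
        String day = new SimpleDateFormat("yyyyMMdd").format(new Date());
        return host + "/upload/" + day + "/" + fileName;
    }
}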

Now you can rest easy. When bandwidth runs short, add a few dozen more image servers; only a small change to the Apache configuration file is needed.

Five: Database clusters and database/table hashing in the system architecture of high-concurrency, high-load websites

As covered in the database cluster and table hashing section above, once traffic is heavy a single database cannot keep up: you either adopt the clustering solution your DB vendor provides (Oracle, Sybase, MySQL Master/Slave, and so on) or hash databases and tables at the application level by module and by ID, the way Sohu's forum does. This section looks more closely at the clustering side.


Classification of cluster software:
Broadly speaking, cluster software falls into three categories according to its focus and the problem it tries to solve: high-performance clusters (HPC), load-balancing clusters (LBC), and high-availability clusters (HAC).
A high-performance cluster (HPC) uses multiple machines in a cluster to work on the same task together, so that both the speed and the reliability of completing the task far exceed what a single machine could achieve, making up for the limits of single-machine performance. Such clusters are widely used in data-heavy, computation-intensive settings such as weather forecasting and environmental monitoring.
A load-balancing cluster (LBC) uses the many machines in a cluster to carry out large numbers of small parallel jobs. Normally, as more people use an application, response times grow and machine performance suffers; with a load-balancing cluster, any machine in the cluster can answer a user's request, and the cluster picks the machine with the lowest load at that moment, the one best able to serve, to accept and answer each request, raising the availability and stability of the system. This type of cluster is the one most used for websites.
A high-availability cluster (HAC) exploits redundancy among the systems in the cluster: when one machine fails, a standby machine quickly takes over its services while the failed machine is repaired and brought back, maximizing the availability of the cluster's services. Such systems are widely used in fields with high reliability requirements, such as banking and telecom services.
The current state of database clusters:
A database cluster applies computer clustering technology to the database. However perfect each vendor claims its architecture to be, the fact remains that Oracle leads and everyone else chases: among clustering solutions, Oracle RAC is still ahead of the other database vendors, Microsoft included, and it meets customers' needs for high availability, high performance, database load balancing, and easy scaling.

Oracle’s Real Application Cluster (RAC)
Microsoft SQL Cluster Server (MSCS)
IBM’s DB2 UDB High Availability Cluster(UDB)
Sybase ASE High Availability Cluster (ASE)
MySQL High Availability Cluster (MySQL CS)

IO-based third-party HA (high-availability) clusters

The main database clustering technologies today fall into the six categories above: some developed by the database vendors themselves, some by third-party clustering companies, and some jointly by a database vendor and a third-party clustering company; the functionality and architecture of each also differ.
RAC (Real Application Cluster) is a technology introduced in the Oracle9i database and the core technology behind Oracle's support for grid computing environments. It addresses an important problem of traditional database applications: the tension between high performance, high scalability, and low price. For a long time Oracle has dominated the cluster database market with its Real Application Cluster (RAC) technology.

Six: Caching in the system architecture of high-concurrency, high-load websites

The basics of caching were covered earlier: Apache's own cache module or a Squid front end for architecture-level caching, and the cache modules each web development language provides for use inside the application. This section surveys the open-source caching frameworks available on the Java side.

Java Open Source Cache Framework

JBossCache/TreeCache: JBossCache is a replicated, transactional cache that lets you cache enterprise application data to improve performance. Cached data is replicated automatically, making it easy to work across a cluster of JBoss servers. JBossCache can run as an MBean service inside the JBoss application server or another J2EE container, and it can also run standalone. JBossCache includes two modules, TreeCache and TreeCacheAOP. TreeCache is a tree-structured, replicated, transactional cache. TreeCacheAOP is an "object-oriented" cache that uses AOP to manage POJOs dynamically.

OSCache: The OSCache tag library, designed by OpenSymphony, is a groundbreaking JSP custom-tag application that provides fast in-memory buffering within existing JSP pages. OSCache is a widely adopted, high-performance J2EE caching framework and can serve as a general caching solution for any Java application. Its characteristics: it can cache any object, whether parts of JSP pages, HTTP requests, or arbitrary Java objects, without restriction; it has a comprehensive API that gives you programmatic control over all OSCache features; it offers a persistent cache that can be written to disk at will, so data that is expensive to create stays cached even across application restarts; it supports clustering, with cluster cache data configurable without code changes; and it gives you fine-grained control over the expiration of cached records, including pluggable refresh strategies when the default behavior is not enough.
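For orientation, here is a small hedged sketch of the classic OSCache 2.x usage pattern via GeneralCacheAdministrator; the key, refresh period, and the page-building helper are illustrative assumptions.

import com.opensymphony.oscache.base.NeedsRefreshException;
import com.opensymphony.oscache.general.GeneralCacheAdministrator;

public class OsCacheExample {
    private final GeneralCacheAdministrator admin = new GeneralCacheAdministrator();

    public String loadPage(String key) {
        String content;
        try {
            // Ask the cache for the entry; refresh if it is older than 3600 seconds.
            content = (String) admin.getFromCache(key, 3600);
        } catch (NeedsRefreshException nre) {
            try {
                content = expensiveBuild(key);        // e.g. render the page from the database
                admin.putInCache(key, content);
            } catch (Exception e) {
                // Rebuilding failed: fall back to the stale copy (may be null) and release the update lock.
                content = (String) nre.getCacheContent();
                admin.cancelUpdate(key);
            }
        }
        return content;
    }

    private String expensiveBuild(String key) {
        return "<html>page for " + key + "</html>";   // stand-in for the real page rendering
    }
}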

JCACHE: JCACHE is an upcoming standard specification (JSR 107) that describes how to temporarily cache Java objects in memory, covering object creation, shared access, spooling, invalidation, consistency across JVMs, and so on. It can be used to cache the data read most often inside JSPs, such as product catalogs and price lists. With JCACHE, response times for most queries are accelerated by serving cached data (internal testing shows response times roughly 15 times faster).

Ehcache: Ehcache came out of Hibernate, where it is used as the data caching solution.
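A minimal usage sketch, assuming the Ehcache 2.x API (net.sf.ehcache); the cache name, element limit, and TTL/TTI values are illustrative.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();   // default in-memory configuration

        // name, max in-memory elements, overflowToDisk, eternal, timeToLive (s), timeToIdle (s)
        Cache cache = new Cache("hotData", 10000, false, false, 600, 300);
        manager.addCache(cache);

        cache.put(new Element("user:42", "cached user profile"));

        Element hit = cache.get("user:42");
        System.out.println(hit != null ? hit.getObjectValue() : "miss");

        manager.shutdown();
    }
}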

Java Caching System: JCS is a sub-project of Jakarta's Turbine project. It is a composite caching tool that can cache objects in memory or on disk, with per-object expiration settings, and it can also be used to build a distributed caching architecture for high-performance applications. Objects that are accessed frequently and are expensive to fetch can be kept in the cache to improve service performance, and JCS is a good tool for that; caching can significantly improve performance for applications where reads far outnumber writes.

SwarmCache SwarmCache is a simple and powerful distributed caching mechanism. It uses IP multicast to efficiently communicate between cached instances. It is ideal for quickly improving the performance of clustered web applications.

ShiftOne: ShiftOne Object Cache is a Java library that provides basic object caching. The policies implemented are first in, first out (FIFO), least recently used (LRU), and least frequently used (LFU). Every policy caps the maximum number of elements and their maximum time to live.

WhirlyCache Whirlycache is a fast, configurable cache of objects that exists in memory. It can speed up a website or application by caching objects that would otherwise have to be built by querying a database or other costly processes.

Jofti Jofti can index and search objects in the cache layer (supports EHCache, JBossCache and OSCache) or in storage structures that support the Map interface. The framework also provides transparency for the addition, deletion, and modification of objects in the index as well as easy-to-use query capabilities for search.

cache4j: cache4j is a Java object cache with a simple API and a fast implementation. Its features include in-memory caching, a design for multi-threaded environments, two implementations (synchronized and blocking), several eviction policies (LFU, LRU, FIFO), and storage of objects via strong or soft references.

Open Terracotta: a JVM-level open-source clustering framework that provides HTTP session replication, distributed caching, POJO clustering, and coordination of distributed applications across the JVMs in a cluster (using code injection, so you do not need to modify anything).

sccache: the object caching system used by SHOP.COM. sccache is an in-process cache plus a shared second-level cache that stores cached objects on disk. It supports associated keys, keys of any size, data of any size, and automatic garbage collection.

Shoal Shoal is a Java-based scalable dynamic cluster framework that provides infrastructure support for building fault-tolerant, reliable and available Java applications. This framework can also be integrated into any Java product that does not wish to be tied to a specific communication protocol, but requires cluster and distributed systems support. Shoal is the clustering engine for GlassFish and JonAS application servers.

Simple-Spring-Memcached Simple-Spring-Memcached, which encapsulates calls to MemCached, making MemCached client development extremely simple.

The above is the detailed content of Examples of high-concurrency solutions and high-load optimization in Java. For more information, please follow other related articles on the PHP Chinese website!
