
A brief discussion on the comparison of several commonly used thread pools in Java

高洛峰 · Original · 2017-01-23 16:15:19

1. Why use a thread pool

Many server applications such as Web servers, database servers, file servers, or mail servers are oriented towards processing a large number of short tasks from some remote source. The request reaches the server in some way, perhaps through a network protocol (such as HTTP, FTP, or POP), through a JMS queue, or perhaps by polling a database. Regardless of how the requests arrive, a common situation in server applications is that the processing time of a single task is very short but the number of requests is huge.

A simple model for building server applications is to create a new thread every time a request arrives, and then serve the request in the new thread. This approach actually works fine for prototyping, but if you try to deploy a server application that runs in this way, the serious shortcomings of this approach become apparent. One of the disadvantages of the thread-per-request approach is that creating a new thread for each request is expensive; the server that creates a new thread for each request spends a lot of time creating and destroying threads. This consumes more time and system resources than is spent processing actual user requests.

In addition to the overhead of creating and destroying threads, active threads also consume system resources. Creating too many threads in one JVM can cause the system to run out of memory or thrash from excessive context switching. To prevent resource starvation, server applications need some way to limit the number of requests they handle at any given time.

The thread pool offers a solution to both the thread life-cycle overhead problem and the resource shortage problem. By reusing threads across multiple tasks, the cost of thread creation is spread over many tasks. As an added benefit, because a thread already exists when a request arrives, the delay introduced by thread creation disappears and the request can be serviced immediately, making the application more responsive. Moreover, by properly tuning the number of threads in the pool, you can force incoming requests to wait until a thread becomes available once the number of outstanding requests exceeds a threshold, thereby preventing resource exhaustion.

2. Risks of using thread pools

While thread pools are a powerful mechanism for building multi-threaded applications, using them is not without risk. Applications built with thread pools are susceptible to all the concurrency risks of any other multi-threaded application, such as synchronization errors and deadlock. They are also susceptible to a few risks specific to thread pools, such as pool-related deadlock, resource shortage, and thread leakage.

2.1 Deadlock

Any multi-threaded application carries a risk of deadlock. A set of processes or threads is deadlocked when each of them is waiting for an event that only another member of the set can cause. The simplest case of deadlock: thread A holds an exclusive lock on object X and is waiting for the lock on object Y, while thread B holds an exclusive lock on object Y and is waiting for the lock on object X. Unless there is some way to break out of waiting for the lock (which Java's built-in locking does not support), the deadlocked threads will wait forever.
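The lock-ordering case just described fits in a few lines. The sketch below (a hypothetical DeadlockDemo, not part of the original article) acquires the two monitor locks in opposite orders; run it and both threads block forever:

public class DeadlockDemo {
    private static final Object lockX = new Object();
    private static final Object lockY = new Object();

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            synchronized (lockX) {
                sleep(100);                      // give B time to grab lockY
                synchronized (lockY) {
                    System.out.println("A got both locks");
                }
            }
        });
        Thread b = new Thread(() -> {
            synchronized (lockY) {
                sleep(100);                      // give A time to grab lockX
                synchronized (lockX) {
                    System.out.println("B got both locks");
                }
            }
        });
        a.start();
        b.start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}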

While any multi-threaded program carries some risk of deadlock, thread pools introduce another kind: one in which every pool thread is blocked waiting for the result of another task that is still sitting in the queue, while that queued task can never run because no thread is free. This happens, for example, when a thread pool is used to implement a simulation involving many interacting objects: the simulated objects send queries to one another, the queries execute as queued tasks, and the querying objects wait synchronously for their responses.
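A minimal sketch of this pool-induced deadlock, assuming a deliberately tiny pool so the effect is immediate (the class name PoolDeadlockDemo is illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDeadlockDemo {
    public static void main(String[] args) throws Exception {
        // A single worker makes the problem obvious; a larger pool deadlocks the
        // same way once every worker is blocked waiting on a queued task.
        ExecutorService pool = Executors.newFixedThreadPool(1);

        Future<String> outer = pool.submit(() -> {
            // The inner task is queued, but the only worker is busy right here,
            // waiting for the inner result -- so neither can ever make progress.
            Future<String> inner = pool.submit(() -> "inner result");
            return "outer saw: " + inner.get();
        });

        System.out.println(outer.get());   // hangs forever
    }
}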

2.2 Insufficient Resources

One advantage of thread pools is that they generally perform very well relative to other alternative scheduling mechanisms (some of which we have already discussed). But this is only true if the thread pool size is adjusted appropriately. Threads consume a lot of resources including memory and other system resources. In addition to the memory required by the Thread object, each thread requires two execution call stacks, which can be large. In addition, the JVM may create a native thread for each Java thread, and these native threads will consume additional system resources. Finally, although the scheduling overhead of switching between threads is small, if there are many threads, context switching may seriously affect the performance of the program.

If the thread pool is too large, the resources consumed by those threads may seriously affect system performance. Switching between threads will waste time, and using more threads than you actually need can cause resource starvation issues because the pool threads are consuming resources that might be used more efficiently by other tasks. In addition to the resources used by the thread itself, the work done in servicing the request may require other resources, such as JDBC connections, sockets, or files. These are also limited resources, and too many concurrent requests may cause failures, such as being unable to allocate a JDBC connection.

2.3 Concurrency Errors

Thread pools and other queuing mechanisms rely on the wait() and notify() methods, which are tricky to use correctly. If coded incorrectly, notifications can be lost, leaving threads idle even though there is work in the queue to process. Extreme care must be taken when using these methods; it is far better to use an existing implementation that is already known to work, such as Doug Lea's util.concurrent package (whose ideas now live on in java.util.concurrent).
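For instance, instead of hand-rolled wait()/notify(), a BlockingQueue from java.util.concurrent handles the signalling internally. A minimal hand-off sketch under that assumption (class and task names are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        // The bounded queue does the wait/notify bookkeeping itself, so no
        // notification can be "lost" between producer and consumer.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String task = queue.take();      // blocks until work arrives
                    if ("STOP".equals(task)) {
                        break;
                    }
                    System.out.println("processing " + task);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        queue.put("task-1");
        queue.put("task-2");
        queue.put("STOP");
        worker.join();
    }
}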

2.4 Thread leaks

A serious risk in all kinds of thread pools is the thread leak, which happens when a thread is removed from the pool to perform a task but is not returned to the pool when the task completes. One way thread leaks occur is when a task throws a RuntimeException or an Error. If the pool class does not catch them, the thread simply exits and the size of the thread pool is permanently reduced by one. When this happens enough times, the thread pool eventually ends up empty and the system stalls because no threads are available to process tasks.

Some tasks may wait forever for a resource or for input from a user, where the resource is not guaranteed to become available and the user may have gone home. Such tasks stall permanently, and stalled tasks cause the same problem as leaked threads: if a thread is permanently occupied by such a task, it is effectively removed from the pool. Tasks like these should either be given their own thread or be made to wait only for a limited time.
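For the first cause above, an uncaught RuntimeException or Error, one common defense is to catch Throwable at the top of the task so a failing task cannot take its worker thread with it. This is only a sketch of that idea, not a feature of any particular pool class; the class name and the simulated failure are hypothetical:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class GuardedTaskDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() -> {
                try {
                    if (id == 2) {
                        throw new RuntimeException("boom");   // simulated failure
                    }
                    System.out.println("task " + id + " done");
                } catch (Throwable t) {
                    // Log and swallow: the worker thread stays alive for the
                    // next task instead of silently leaking out of the pool.
                    System.err.println("task " + id + " failed: " + t.getMessage());
                }
            });
        }
        pool.shutdown();
    }
}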

2.5 Request Overload

It is possible to overwhelm the server with requests alone. In this scenario we may not want to queue every incoming request, because the tasks waiting in the queue can themselves consume too many system resources and cause resource starvation. What to do in this situation is up to you: in some cases you can simply drop the request and rely on a higher-level protocol to retry it later, or you can refuse the request with a response indicating that the server is temporarily busy.
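A sketch of one way to express such a policy, assuming a hand-built ThreadPoolExecutor with a bounded queue (the sizes are arbitrary): the default AbortPolicy would throw RejectedExecutionException, which maps naturally onto a "server busy" response, while CallerRunsPolicy, used below, pushes the work back onto the submitting thread and so throttles the request source.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class OverloadDemo {
    public static void main(String[] args) {
        // At most 2 worker threads and 4 queued tasks; anything beyond that is
        // handled by the saturation policy instead of growing without bound.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 20; i++) {
            final int id = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " handled request " + id));
        }
        pool.shutdown();
    }
}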

3. Guidelines for Using Thread Pools Effectively

Thread pools can be an extremely effective way to build server applications as long as you follow a few simple guidelines:

Don't queue tasks that wait synchronously for the results of other tasks. This can lead to the form of deadlock described above, in which all threads are occupied by tasks that are waiting for the results of queued tasks that can never execute because every thread is busy.

Be careful when using pooled threads for potentially long operations. If the program must wait for a resource, such as an I/O operation, to complete, specify a maximum wait time and then decide whether to fail the task or requeue it for later execution (a sketch of a bounded wait appears after these guidelines). Doing so guarantees that some progress is eventually made by freeing the thread for a task that may complete successfully.

Understand the task. To effectively size the thread pool, you need to understand the tasks that are being queued and what they are doing. Are they CPU-bound? Are they I/O-bound? Your answer will affect how you adjust your application. If you have different task classes with very different characteristics, it might make sense to have multiple work queues for different task classes so that each pool can be tuned accordingly.
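As promised above, a minimal sketch of bounding the wait on a slow operation, using Future.get with a timeout and then cancelling the task; the 10-second sleep merely stands in for a slow I/O call:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWaitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Future<String> slow = pool.submit(() -> {
            Thread.sleep(10_000);              // stands in for a slow I/O call
            return "late result";
        });

        try {
            // Wait at most 2 seconds for the result.
            System.out.println(slow.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            slow.cancel(true);                 // interrupt the task and free the worker
            System.out.println("gave up on the slow task");
        }
        pool.shutdown();
    }
}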

4. Thread pool size setting

Adjusting the size of the thread pool is basically to avoid two types of errors: too few threads or too many threads. Fortunately, for most applications, the margin between too much and too little is pretty wide.

Recall that using threads in an application has two main benefits: it allows processing to continue while waiting for slow operations such as I/O, and it exploits multiple processors. In a compute-bound application running on a machine with N processors, adding threads may improve throughput as the thread count approaches N, but adding threads beyond N buys nothing. In fact, too many threads can even degrade performance because of the extra context-switching overhead.

The optimal size of the thread pool depends on the number of available processors and the nature of the tasks in the work queue. On a system with N processors and a single work queue holding purely computational tasks, maximum CPU utilization is generally achieved with a pool of N or N+1 threads.

For tasks that may wait for I/O to complete, for example a task that reads an HTTP request from a socket, the pool needs to be larger than the number of available processors, because not all of the threads will be working all of the time. By profiling, you can estimate the ratio of wait time (WT) to service time (ST) for a typical request. If we call this ratio WT/ST, then on a system with N processors you need roughly N*(1+WT/ST) threads to keep the processors fully utilized.
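Plugging numbers into that formula is straightforward; the sketch below assumes hypothetical profiling results of 50 ms average wait time and 5 ms average service time:

public class PoolSizeEstimate {
    public static void main(String[] args) {
        int processors = Runtime.getRuntime().availableProcessors();   // N
        double waitTime = 50.0;      // average wait time per request, in ms (assumed)
        double serviceTime = 5.0;    // average service (compute) time per request, in ms (assumed)

        // N * (1 + WT/ST): with 8 processors and WT/ST = 10 this suggests 88 threads.
        int poolSize = (int) (processors * (1 + waitTime / serviceTime));
        System.out.println("suggested pool size: " + poolSize);
    }
}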

Processor utilization is not the only consideration in the thread pool sizing process. As your thread pool grows, you may encounter limits on the scheduler, available memory, or other system resources, such as the number of sockets, open file handles, or database connections.

5. Several commonly used thread pools

5.1 newCachedThreadPool

Creates a cacheable thread pool. If the pool grows beyond what processing requires, idle threads are reclaimed flexibly; if no idle thread is available when a task arrives, a new thread is created.

The characteristics of this type of thread pool are:

• There is effectively no limit on the number of worker threads that can be created (in practice the limit is Integer.MAX_VALUE), so the pool can add threads flexibly.

• If no task is submitted to the pool for a long time, that is, if a worker thread has been idle for the specified time (1 minute by default), the worker thread terminates automatically. If a new task is submitted after such termination, the pool creates a new worker thread.

• When using CachedThreadPool, you must be careful to control the number of tasks; otherwise a large number of threads running at the same time can bring the system to a standstill (a capped alternative is sketched after the sample code below).

The sample code is as follows:

package test;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
        for (int i = 0; i < 10; i++) {
            final int index = i;
            try {
                // Sleeping before each submission leaves the previous worker idle,
                // so the pool reuses it instead of creating a new thread.
                Thread.sleep(index * 1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            cachedThreadPool.execute(new Runnable() {
                public void run() {
                    System.out.println(index);
                }
            });
        }
    }
}
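If the unbounded growth warned about above is a concern, the same hand-off behaviour can be reproduced with an explicit cap by calling the ThreadPoolExecutor constructor directly. This is only a sketch; the cap of 100 threads is an arbitrary illustrative choice, and once all 100 are busy further submissions are rejected under the default policy:

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedCachedPoolDemo {
    public static void main(String[] args) {
        // Same shape as newCachedThreadPool (direct hand-off queue, 60-second
        // keep-alive) but with at most 100 workers instead of Integer.MAX_VALUE.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 100,
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<>());

        pool.execute(() -> System.out.println(
                "running in " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}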

5.2 newFixedThreadPool

Creates a thread pool with a fixed number of worker threads. Each time a task is submitted, a worker thread is created until the pool reaches its maximum size; after that, submitted tasks are stored in the pool's queue.

FixedThreadPool is a typical thread pool: it gives you the efficiency gains of pooling and saves the overhead of creating threads. However, when the pool is idle, that is, when it contains no runnable tasks, it does not release its worker threads and continues to occupy some system resources.

The sample code is as follows:

package test;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        // Three fixed worker threads; the remaining tasks wait in the queue.
        ExecutorService fixedThreadPool = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 10; i++) {
            final int index = i;
            fixedThreadPool.execute(new Runnable() {
                public void run() {
                    try {
                        System.out.println(index);
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }
}

Because the pool size is 3 and each task prints its index and then sleeps for 2 seconds, three numbers are printed every two seconds.

The size of a fixed thread pool is best chosen from the system's resources, for example using Runtime.getRuntime().availableProcessors().

5.3 newSingleThreadExecutor

Creates a single-threaded Executor, that is, one with a single worker thread to execute tasks. Because only that one worker thread ever runs tasks, all tasks are guaranteed to execute in the specified order (FIFO, LIFO, or by priority, depending on the queue). If the thread terminates abnormally, a new one replaces it so sequential execution continues. The defining feature of a single worker thread is that tasks execute sequentially and no more than one thread is active at any given time.

The sample code is as follows:

package test;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        // One worker thread, so the indexes are printed strictly in order.
        ExecutorService singleThreadExecutor = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 10; i++) {
            final int index = i;
            singleThreadExecutor.execute(new Runnable() {
                public void run() {
                    try {
                        System.out.println(index);
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }
}

5.4 newScheduledThreadPool

Creates a fixed-size thread pool that supports scheduled (delayed) and periodic task execution.

The sample code below delays execution by 3 seconds:

package test;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(5);
        // Run once, 3 seconds from now.
        scheduledThreadPool.schedule(new Runnable() {
            public void run() {
                System.out.println("delay 3 seconds");
            }
        }, 3, TimeUnit.SECONDS);
    }
}

The sample code below runs a task every 3 seconds after an initial delay of 1 second:

package test;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(5);
        // First run after 1 second, then every 3 seconds at a fixed rate.
        scheduledThreadPool.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.println("delay 1 second, and execute every 3 seconds");
            }
        }, 1, 3, TimeUnit.SECONDS);
    }
}

This article has briefly compared several thread pools commonly used in Java. I hope it gives you a useful reference, and I hope everyone will continue to support the PHP Chinese website.

