C# implements the thread pool function by itself (1)
Technical background of thread pool
In object-oriented programming, creating and destroying objects is time-consuming, because every object creation acquires memory or other resources. One way to improve the efficiency of a service program is therefore to reduce the number of object creations and destructions as much as possible, especially for resource-intensive objects. How to reuse existing objects to serve requests becomes a key problem to solve, and this is exactly why "pooled resource" techniques emerged. The familiar database connection pool was created based on this idea, and the thread pool technology introduced in this article follows the same idea.
How thread pool technology improves the performance of server programs
The server program mentioned here refers to any program that can accept and process client requests, not only web server programs that accept network requests from clients.
Multi-threading technology mainly addresses the execution of multiple threads on a processor unit. It can significantly reduce the processor's idle time and increase its throughput. However, if multi-threading is applied improperly, it will increase the processing time of a single task. A simple example:
Assume that the time to complete a task on a server is T
T1: the time to create the thread
T2: the time to execute the task in the thread, including the time needed for synchronization between threads
T3: the time to destroy the thread
Obviously T = T1 + T2 + T3. Note that this is an extremely simplified assumption.
It can be seen that T1 and T3 are overhead introduced by multi-threading itself. We want to reduce the time spent in T1 and T3 and thereby reduce T. However, some users of threads do not notice this and frequently create and destroy threads in their programs, so T1 and T3 end up occupying a considerable proportion of T. This clearly highlights the threads' weaknesses (T1, T3) rather than their strength (concurrency).
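To make the proportions concrete (the numbers below are purely an illustrative assumption, not measurements): if creating a thread costs T1 = 1 ms, destroying it costs T3 = 1 ms, and the task itself needs only T2 = 0.5 ms, then T = 2.5 ms and 80% of the total time is thread-management overhead. If the same task instead runs on an already-created pooled thread, the per-request cost is close to T2 alone, roughly a 5x improvement in this hypothetical case.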
Thread pool technology focuses on how to shorten or rearrange the T1 and T3 times, thereby improving the performance of the server program. It moves T1 and T3 into the startup and shutdown phases of the server program, or into idle periods, so that when the server processes client requests there is no T1 or T3 overhead.
The thread pool not only adjusts when T1 and T3 are incurred, it also significantly reduces the number of threads created. Consider an example:
Suppose a server has to handle 50,000 requests a day, and each request requires a separate thread to complete. We can compare the total number of threads created with and without thread pool technology. In a thread pool the number of threads is generally fixed, so the total number of threads created will not exceed the number of threads in the pool, or its upper limit (hereinafter referred to as the thread pool size). If the server does not use a thread pool, the total number of threads created will be 50,000, while a typical thread pool size is far smaller than 50,000. A server program that uses the thread pool therefore does not waste time creating 50,000 threads while processing requests, which improves efficiency.
These are all assumptions and cannot fully explain the problem. Below I will present a simple implementation of a thread pool and run a comparative test on the program to illustrate the advantages and application areas of thread pool technology.
Simple implementation and comparison test of thread pool
Generally, a simple thread pool contains at least the following components.
Thread Pool Manager (ThreadPoolManager): used to create and manage thread pools
Worker thread (WorkThread): threads in the thread pool
Task interface (Task): an interface that every task must implement so that the worker thread can schedule its execution.
Task queue: used to store unprocessed tasks and provide a buffering mechanism.
Next I demonstrate the simplest possible thread pool; no optimization has been done. First, the thread pool manager class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;
using System.Threading;

namespace ThreadManager
{
    public class ThreadPoolManager
    {
        // Not used yet in this simple version.
        private int MaxThreadNum;
        private int MinThreadNum;
        private int GrowStepNum;

        // Current number of threads
        public int ThreadNum { get; set; }

        // Default number of threads
        public int DefaultThreadNum { get; set; }

        private Queue<Task> TaskQueue;
        private Queue<WorkThread> WorkThreadList;

        public ThreadPoolManager(int i)
        {
            TaskQueue = new Queue<Task>();
            WorkThreadList = new Queue<WorkThread>();
            DefaultThreadNum = 10;
            if (i > 0)
                DefaultThreadNum = i;
            // Use DefaultThreadNum so an invalid argument still yields a usable pool.
            CreateThreadPool(DefaultThreadNum);
        }

        public ThreadPoolManager() : this(10)
        {
        }

        public bool IsAllTaskFinish()
        {
            return TaskQueue.Count == 0;
        }

        public void CreateThreadPool(int i)
        {
            if (WorkThreadList == null)
                WorkThreadList = new Queue<WorkThread>();
            lock (WorkThreadList)
            {
                for (int j = 0; j < i; j++)
                {
                    ThreadNum++;
                    WorkThread workthread = new WorkThread(ref TaskQueue, ThreadNum);
                    WorkThreadList.Enqueue(workthread);
                }
            }
        }

        public void AddTask(Task task)
        {
            if (task == null)
                return;
            lock (TaskQueue)
            {
                TaskQueue.Enqueue(task);
            }
            //Monitor.Enter(TaskQueue);
            //TaskQueue.Enqueue(task);
            //Monitor.Exit(TaskQueue);
        }

        public void CloseThread()
        {
            // Dequeue and close each worker; stop if dequeuing fails.
            while (WorkThreadList.Count != 0)
            {
                try
                {
                    WorkThread workthread = WorkThreadList.Dequeue();
                    workthread.CloseThread();
                    continue;
                }
                catch (Exception)
                {
                }
                break;
            }
        }
    }
}
Worker thread class
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ThreadManager
{
    public class WorkThread
    {
        public int ThreadNum { get; set; }
        private bool flag;
        private Queue<Task> TaskQueue;
        private Task task;

        public WorkThread(ref Queue<Task> queue, int i)
        {
            this.TaskQueue = queue;
            ThreadNum = i;
            flag = true;
            new Thread(run).Start();
        }

        public void run()
        {
            while (flag && TaskQueue != null)
            {
                // Fetch the next task from the shared queue.
                lock (TaskQueue)
                {
                    try
                    {
                        task = TaskQueue.Dequeue();
                    }
                    catch (Exception)
                    {
                        task = null;
                    }
                    if (task == null)
                        continue;
                }
                try
                {
                    task.SetEnd(false);
                    task.StartTask();
                }
                catch (Exception)
                {
                }
                try
                {
                    if (!task.IsEnd())
                    {
                        task.SetEnd(false);
                        task.EndTask();
                    }
                }
                catch (Exception)
                {
                }
            } // end of while
        }

        public void CloseThread()
        {
            flag = false;
            try
            {
                if (task != null)
                    task.EndTask();
            }
            catch (Exception)
            {
            }
        }
    }
}
Task interface and implementation class
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ThreadManager
{
    public interface Task
    {
        /// <summary>
        /// Set the end flag of the task.
        /// </summary>
        void SetEnd(bool flag);

        /// <summary>
        /// Start the task.
        /// </summary>
        void StartTask();

        /// <summary>
        /// End the task.
        /// </summary>
        void EndTask();

        /// <summary>
        /// Get the status of the task.
        /// </summary>
        /// <returns></returns>
        bool IsEnd();
    }
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ThreadManager
{
    public class TestTask : Task
    {
        private bool is_end;

        public void SetEnd(bool flag)
        {
            is_end = flag;
        }

        public void StartTask()
        {
            Run();
        }

        public void EndTask()
        {
            is_end = true;
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + ":" + "finished!");
        }

        public bool IsEnd()
        {
            return is_end;
        }

        public void Run()
        {
            for (int i = 0; i < 1000; i++)
            {
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + ":" + i);
            }
        }
    }
}
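For completeness, here is one possible way to drive the classes above. This driver (the Program class, the pool size of 5 and the 20 test tasks) is my own sketch and is not part of the original article:

using System;
using System.Threading;

namespace ThreadManager
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Create a pool with 5 worker threads and queue 20 test tasks.
            ThreadPoolManager pool = new ThreadPoolManager(5);
            for (int i = 0; i < 20; i++)
            {
                pool.AddTask(new TestTask());
            }

            // IsAllTaskFinish only checks that the queue is empty, so this is
            // a rough shutdown: wait until the queue is drained, then stop the workers.
            while (!pool.IsAllTaskFinish())
            {
                Thread.Sleep(100);
            }
            pool.CloseThread();
        }
    }
}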
The problem with this simple model is that the worker threads keep polling the task queue: when the queue is empty they repeatedly try and fail to obtain a task, and this busy-waiting drags performance down badly. The needed improvement is to add a semaphore mechanism so that the program does not spin idly!
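As a preview of the kind of fix meant here (this is only a rough sketch under my own assumptions, not the optimized code from the next article): a Semaphore can be shared between the manager and the workers, released once for every enqueued task and waited on before each dequeue, so that idle workers block instead of spinning.

// Hypothetical fragment inside ThreadPoolManager; the semaphore would have to be
// shared with every WorkThread, e.g. passed into its constructor like the task queue.
private Semaphore taskSignal = new Semaphore(0, int.MaxValue);

public void AddTask(Task task)
{
    if (task == null) return;
    lock (TaskQueue)
    {
        TaskQueue.Enqueue(task);
    }
    taskSignal.Release();   // signal that one task is now available
}

// In WorkThread.run(), the busy polling would be replaced with:
// taskSignal.WaitOne();                              // block until a task exists
// lock (TaskQueue) { task = TaskQueue.Dequeue(); }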
In the next article I will optimize it so that the thread pool can truly improve efficiency!