A process is the operating system's smallest unit of resource allocation; the resources include CPU time, memory space, disk I/O, and so on. Multiple threads in the same process share all of the process's system resources, while processes are independent of one another. A process is a running activity of a program, with certain independent functions, over a certain data set, and it is the independent unit of resource allocation and scheduling in the system.
A process is an execution activity of a program on the computer. When you run a program, you start a process. Obviously, programs are dead and static, while processes are alive and dynamic. Processes can be divided into system processes and user processes. The processes that carry out the operating system's own functions are system processes; they are the operating system itself in a running state. User processes are all the processes started by the user.
A thread is an entity within a process and the basic unit of CPU scheduling and dispatch. It is a basic unit, smaller than a process, that is capable of running independently. A thread itself owns essentially no system resources, only the few that are essential for running (such as a program counter, a set of registers, and a stack), but it shares all the resources owned by the process with the other threads in the same process.
Any program must create threads; in particular, every Java program starts a main thread to run the main function. Scheduled tasks and timers in Java web development, JSP and Servlet, asynchronous message processing mechanisms, remote access interfaces such as RMI, and any event listeners such as onClick handlers are all inseparable from a knowledge of threads and concurrency.
Multi-core: also called chip multiprocessing (Chip Multiprocessors, or CMP), an idea proposed by Stanford University in the United States. It integrates the SMP (symmetric multiprocessing) structure of large-scale parallel processors onto a single chip, with each processor executing different processes in parallel. Programs that rely on multiple CPUs running in parallel at the same time are an important direction for achieving ultra-high-speed computing; this is called parallel processing.
Multi-threading: Simultaneous Multithreading, or SMT, lets multiple threads execute simultaneously on the same processor and share the processor's execution resources.
Number of cores and threads: current mainstream CPUs are all multi-core. The purpose of increasing the number of cores is to increase the number of hardware threads, because the operating system executes tasks through threads. Generally cores and threads have a 1:1 correspondence, so a quad-core CPU usually has four hardware threads. However, after Intel introduced hyper-threading technology, the ratio of cores to threads became 1:2.
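As a quick check, you can ask the JVM how many logical processors (hardware threads) the operating system reports; a minimal sketch:

```java
public class CpuInfo {
    public static void main(String[] args) {
        // Number of logical processors visible to the JVM; on a
        // hyper-threaded quad-core CPU this typically reports 8.
        int logicalCores = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors: " + logicalCores);
    }
}
```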
In everyday development we don't feel limited by the number of CPU cores: we can start threads whenever we want, even on a single-core CPU. Why? Because the operating system provides a CPU time-slice round-robin mechanism.
Time-slice round-robin scheduling is one of the oldest, simplest, fairest, and most widely used algorithms, also known as RR scheduling. Each process is assigned a time period, called its time slice, which is the length of time the process is allowed to run.
Let's take an example. If highway A has 4 lanes, the maximum number of vehicles running side by side is 4; as long as the number of vehicles on this highway is at most 4, they can all run in parallel. A CPU works on the same principle: one CPU is like one highway, and its number of cores or threads is the number of vehicles that can travel side by side; multiple CPUs are like multiple highways, each with several lanes.
When talking about concurrency, we must attach a unit of time: how much concurrency per unit time? Without the unit of time, the number is actually meaningless.
As the saying goes, you can't do two things at once, and the same holds for computers. In principle, one CPU can only be allocated to one process at a time in order to run that process. The computer we usually use has only one CPU, that is, only one heart; to let it serve many purposes and run multiple processes at the same time, concurrency techniques must be used. Implementing concurrency is quite complex; the easiest technique to understand is the "time-slice round-robin process scheduling algorithm".
Concurrency: means that the application can alternately execute different tasks. For example, multiple threads running on a single CPU core are not executing at the same moment: if you open two threads, the CPU keeps switching between the two tasks at an almost imperceptible speed to produce the effect of "simultaneous execution". It is simply that the computer switches too fast for us to notice.
Parallelism: means that the application can perform different tasks at the same time. For example, you can eat and watch TV at the same time; these two things are carried out simultaneously.
**The difference between concurrency and parallelism: one is alternating execution, the other is simultaneous execution.**
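A minimal sketch in Java: two threads whose printed steps may interleave (concurrent execution on one core) or genuinely overlap (parallel execution on several cores):

```java
public class Interleaving {
    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
            }
        };
        // On a single core the two threads take turns on time slices;
        // on multiple cores their steps may truly run at the same time.
        new Thread(task, "task-A").start();
        new Thread(task, "task-B").start();
    }
}
```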
With the birth of multi-core CPUs, multi-threaded, highly concurrent programming has received more and more attention.
From the introduction to CPUs above, you can see that there is no CPU on the market today that does not use a multi-thread concurrency mechanism, especially on servers with more than one CPU. The basic scheduling unit of a program is the thread, and one thread can only run on one hardware thread of one core of one CPU at a time. If you have an i3 CPU, it has at the very least dual cores and 4 hardware threads: a single-threaded program would waste 3/4 of the CPU's capacity, while a multi-threaded design can run on multiple hardware threads of multiple cores at the same time, making full use of the CPU, reducing its idle time, exploiting its computing power, and increasing concurrency.
Take the download tools we often use as an example: many people buy a membership because the membership version enables multi-threaded downloading, and nobody can tolerate downloading on a single thread. Why? Because multi-threaded downloading is fast.
In program development, making a web page one second faster can increase conversions when the user base is large. In the web pages we browse every day, the browser opens several threads while loading a page to fetch network resources and improve the site's response speed. Multi-threading and high concurrency are everywhere in computing.
For example, in an e-commerce project, placing an order can be split from sending text messages and emails to the user: the two notification steps are separated into independent modules and handed over to other threads to execute. This introduces asynchronous operation and improves system performance, and it also makes the program modular, clear, and simple.
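As a sketch of that split, the order-placing thread could hand the notification work to a small thread pool via an ExecutorService. Here sendSms and sendEmail are hypothetical stand-ins for real SMS/email gateways:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OrderService {
    // A small pool for notification work; the size 2 is an arbitrary choice here.
    private final ExecutorService notifier = Executors.newFixedThreadPool(2);

    public void placeOrder(String orderId) {
        System.out.println("Order " + orderId + " saved"); // core step stays on the calling thread
        // Hand the slow notification steps to other threads so the
        // order thread can return immediately.
        notifier.submit(() -> sendSms(orderId));
        notifier.submit(() -> sendEmail(orderId));
    }

    // Hypothetical stand-ins for real notification gateways.
    private void sendSms(String orderId)   { System.out.println("SMS for " + orderId); }
    private void sendEmail(String orderId) { System.out.println("Email for " + orderId); }

    public static void main(String[] args) {
        OrderService svc = new OrderService();
        svc.placeOrder("A-1001");
        svc.notifier.shutdown(); // let the JVM exit once the queued notifications finish
    }
}
```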
Multiple threads in the same process share resources; that is, they can all access a variable at the same memory address.
For example, if every thread only reads a global or static variable and never writes to it, that variable is generally thread-safe; but if multiple threads write to it at the same time, thread synchronization generally has to be considered, otherwise thread safety may be compromised.
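A minimal sketch of why unsynchronized writes are unsafe: two threads increment a shared counter, and updates get lost because the increment is not atomic.

```java
public class UnsafeCounter {
    private static int counter = 0; // shared mutable state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write: not atomic, so updates can be lost
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // Usually prints less than 200000; wrapping the increment in a
        // synchronized block (or using AtomicInteger) restores correctness.
        System.out.println("counter = " + counter);
    }
}
```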
To solve safety problems between threads, Java introduced the lock mechanism, but used carelessly it can produce the Java thread-deadlock problem: different threads each wait for locks that can never be released, so none of the work can be completed.
Suppose two hungry people must share one knife and one fork and take turns eating. Each needs to obtain two locks: the shared knife and the shared fork. Suppose thread A gets the knife and thread B gets the fork. Thread A blocks waiting for the fork, while thread B waits for the knife held by thread A. This is a contrived example, but although such situations are difficult to detect at runtime, they happen all the time.
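The dinner scenario translates almost directly into Java; a minimal sketch (the sleep simply makes the fatal interleaving reliable, and the program will hang, which is the point):

```java
public class DinnerDeadlock {
    private static final Object knife = new Object();
    private static final Object fork  = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (knife) {          // A takes the knife first
                sleep(100);                 // give B time to grab the fork
                synchronized (fork) {       // then blocks waiting for the fork
                    System.out.println("A eats");
                }
            }
        }).start();
        new Thread(() -> {
            synchronized (fork) {           // B takes the fork first
                sleep(100);
                synchronized (knife) {      // then blocks waiting for the knife
                    System.out.println("B eats");
                }
            }
        }).start();
        // Neither thread can proceed: a deadlock. Acquiring the two locks
        // in the same order in both threads would avoid it.
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```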
Too many threads may cause the system to create a large number of threads, which exhausts system memory, and the excessive context switching on the CPU can bring the system down. So how can we solve this kind of problem?
Some system resources are limited, such as file descriptors. A multi-threaded program can exhaust such resources, because every thread may want one. If the number of threads is large, or the number of threads competing for a resource far exceeds the number available, it is best to use a resource pool. A classic example is the database connection pool: whenever a thread needs a database connection, it takes one from the pool and returns it after use. The resource pool is also called a resource library.
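A minimal sketch of the resource-pool idea using a BlockingQueue: acquire() blocks when every resource is in use, which caps consumption at the pool size. This illustrates the pattern only, not a production connection pool:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A generic pool: take() blocks when the pool is empty, so at most
// poolSize resources are ever in use at once.
public class ResourcePool<T> {
    private final BlockingQueue<T> pool;

    public ResourcePool(List<T> resources) {
        this.pool = new ArrayBlockingQueue<>(resources.size(), false, resources);
    }

    public T acquire() throws InterruptedException { return pool.take(); }
    public void release(T resource) { pool.offer(resource); }

    public static void main(String[] args) throws InterruptedException {
        ResourcePool<String> pool = new ResourcePool<>(Arrays.asList("conn-1", "conn-2"));
        String c = pool.acquire();   // blocks if both resources are in use
        try {
            System.out.println("using " + c);
        } finally {
            pool.release(c);         // always return the resource
        }
    }
}
```

A real database connection pool follows the same shape: take a connection before the query and return it in a finally block afterwards.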
There are many things to pay attention to when developing multi-threaded applications; I hope everyone gradually comes to appreciate the pitfalls in future work.
Threads cooperate with each other to complete a piece of work. For example, one thread modifies the value of an object, and another thread senses the change and then performs the corresponding operation: the whole process starts in one thread, and the final execution happens in another. The former is the producer and the latter is the consumer. This model decouples "what to do" from "how to do it". The simple approach is to have the consumer thread loop continuously, checking in a while loop whether the variable meets the expected condition (usually sleeping briefly between checks), and exiting the loop to complete the consumer's work once the condition is met.
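A minimal sketch of that loop-and-check approach, using a volatile flag as the condition:

```java
public class BusyWaitConsumer {
    private static volatile boolean ready = false; // volatile so the change is visible across threads

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            while (!ready) {               // loop until the condition holds
                try {
                    Thread.sleep(100);     // sleep between checks to save CPU
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            System.out.println("condition met, consumer proceeds");
        });
        consumer.start();
        Thread.sleep(500);                 // the producer does its work...
        ready = true;                      // ...then flips the flag
    }
}
```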
But there are the following problems:
It is difficult to ensure timeliness: while the consumer is sleeping, the condition may already have been met.
It is difficult to reduce overhead. If the sleep time is reduced, say to 1 millisecond, the consumer can detect the change in condition more quickly, but it may consume more processor resources, causing unnecessary waste.
Wait/notify mechanism: one thread A calls the wait() method of object O and enters the waiting state, while another thread B calls notify() or notifyAll() on object O; thread A, after receiving the notification, returns from object O's wait() method and then performs subsequent operations. The two threads interact through object O, and the relationship between wait() and notify()/notifyAll() on the object is like a switch signal used to complete the interaction between the waiting party and the notifying party.
notify(): Notifies one thread waiting on the object, making it return from the wait method; the premise of returning is that the thread has acquired the object's lock. Awakened threads that have not acquired the lock re-enter the WAITING state.
notifyAll(): Notifies all threads waiting on the object.
wait(): The calling thread enters the WAITING state and returns only when notified by another thread or interrupted. Note that after calling wait(), the thread releases the object's lock.
wait(long): Waits with a timeout. The parameter is in milliseconds, meaning wait at most n milliseconds; if no notification arrives within that time, it times out and returns.
wait(long, int): Allows finer-grained control of the timeout, down to nanoseconds.
In the standard wait/notify paradigm, the waiting party follows these principles:
Acquire the object's lock.
If the condition is not met, call the object's wait() method, and check the condition again after being notified.
If the condition is met, execute the corresponding logic.
The notifying party follows these principles:
Acquire the object's lock.
Change the condition.
Notify all threads waiting on the object.
Before calling the wait() and notify() family of methods, a thread must acquire the object-level lock of the object; that is, these methods can only be called inside a synchronized method or synchronized block. After entering wait(), the current thread releases the lock, and before returning from wait() it competes with other threads to reacquire it. After the thread that executed notify() or notifyAll() exits the synchronized block, the awakened threads compete for the object lock; the one that acquires it continues executing, and once it exits the synchronized block and releases the lock, the remaining awakened threads compete again, and so on until all awakened threads have finished executing.
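Putting the two paradigms together, a minimal sketch that uses a boolean flag as the shared condition:

```java
public class WaitNotifyExample {
    private static final Object lock = new Object();
    private static boolean flag = false; // the shared condition

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {                    // waiting party: 1. acquire the object's lock
                while (!flag) {                      // 2. re-check the condition after every wake-up
                    try {
                        lock.wait();                 //    releases the lock while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("condition met"); // 3. condition holds: do the work
            }
        });
        waiter.start();
        Thread.sleep(200);
        synchronized (lock) {   // notifying party: 1. acquire the lock
            flag = true;        // 2. change the condition
            lock.notifyAll();   // 3. notify all threads waiting on the object
        }
    }
}
```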
Should you use notify() or notifyAll()?
Use notifyAll() whenever possible and use notify() with caution, because notify() wakes up only one thread, and we cannot guarantee that the awakened thread is the one we actually need to wake.