Java race conditions and critical sections
A race condition is a special condition that can occur inside a critical section. A critical section is a section of code that is executed by multiple threads, where the order in which the threads execute affects the result.
When the result of executing a critical section may differ depending on the order in which the threads execute it, the critical section is said to contain a race condition. The term comes from the metaphor of the threads racing through the critical section, with the outcome of that race determining the outcome of executing the critical section.
This may sound a bit complicated, so I will elaborate on race conditions and critical sections in the following sections.
Critical Sections
Running more than one thread within the same application does not cause problems by itself. Problems arise when multiple threads access the same resources: for example, the same memory (variables, arrays, or objects), the same systems (databases, web services), or the same files.
In fact, problems only arise if one or more of the threads write to these resources. It is safe to let multiple threads read the same resource, as long as the resource never changes.
Here is an example that may fail if multiple threads are executing at the same time:
public class Counter {
    protected long count = 0;

    public void add(long value){
        this.count = this.count + value;
    }
}
Imagine that two threads, A and B, are executing the add method on the same instance of the Counter class. There is no way to know when the operating system will switch between the two threads. The code in the add method is not executed by the Java virtual machine as a single atomic instruction. Rather, it is executed as a series of smaller instructions, similar to this:
Read the this.count value from memory into the register.
Add value to the register.
Write the value in the register back to memory.
Observe what can happen when the execution of threads A and B is interleaved:
this.count = 0;

A: Reads this.count into a register (0)
B: Reads this.count into a register (0)
B: Adds value 2 to register
B: Writes register value (2) back to memory. this.count now equals 2
A: Adds value 3 to register
A: Writes register value (3) back to memory. this.count now equals 3
The two threads wanted to add the values 2 and 3 to the counter. Thus the value should have been 5 after both threads completed execution. However, since the execution of the two threads is interleaved, the result ends up being different.
In the execution sequence example mentioned above, both threads read the value 0 from memory. Then they add their respective values, 2 and 3, to that value, and write the result back to memory. Instead of 5, the value left in this.count will be the value the last thread wrote to it. In the above example it is Thread A, but it could also be Thread B.
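The interleaving in the trace above can be reproduced deterministically by simulating each thread's CPU register with a local variable. The class and variable names here are illustrative, not part of the original example:

```java
// Simulates the lost-update interleaving from the trace above.
// Local variables aReg and bReg stand in for the CPU registers
// of threads A and B; since the interleaving is fixed in code,
// the result is deterministic.
public class LostUpdateSimulation {

    public static long simulate() {
        long count = 0;     // this.count starts at 0

        long aReg = count;  // A: reads this.count into a register (0)
        long bReg = count;  // B: reads this.count into a register (0)

        bReg += 2;          // B: adds value 2 to its register
        count = bReg;       // B: writes register value back; count == 2

        aReg += 3;          // A: adds value 3 to its register
        count = aReg;       // A: writes register value back; count == 3

        return count;       // 3, not 5 - A's write overwrote B's update
    }

    public static void main(String[] args) {
        System.out.println("final count = " + simulate()); // prints: final count = 3
    }
}
```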
Race conditions in critical sections
In the above example, the code of the add method contains a critical section. When multiple threads execute this critical section, a race condition will occur.
More formally, the situation where two threads compete for the same resource, and the order in which the resource is accessed matters, is called a race condition. A code section that can lead to a race condition is called a critical section.
Preventing race conditions
To prevent race conditions from occurring, you must ensure that the critical section executes as an atomic instruction. That means that once a single thread is executing it, no other thread can execute it until the first thread has left the critical section.
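One way to make the add method atomic is Java's synchronized keyword. This is a sketch of a thread-safe variant of the earlier Counter class; the getter is a small addition for reading the result safely, not part of the original example:

```java
// Thread-safe version of the Counter class: the synchronized
// keyword ensures that only one thread at a time executes add(),
// so the read-modify-write sequence behaves atomically.
public class SynchronizedCounter {
    private long count = 0;

    public synchronized void add(long value) {
        this.count += value;
    }

    public synchronized long getCount() {
        return this.count;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter counter = new SynchronizedCounter();
        // Two threads repeatedly adding 2 and 3, as in the trace above.
        Thread a = new Thread(() -> { for (int i = 0; i < 100000; i++) counter.add(2); });
        Thread b = new Thread(() -> { for (int i = 0; i < 100000; i++) counter.add(3); });
        a.start(); b.start();
        a.join(); b.join();
        // With synchronization no updates are lost:
        System.out.println(counter.getCount()); // prints 500000
    }
}
```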
Race conditions can be avoided by proper thread synchronization in critical sections. Thread synchronization can be achieved using a synchronized block of Java code. It can also be achieved with other synchronization constructs, such as locks or atomic variables like java.util.concurrent.atomic.AtomicInteger.
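As an alternative to locking, the addition in the Counter example can be made atomic with a class from the java.util.concurrent.atomic package. AtomicLong is used in this sketch since count was a long; the class name is illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// Lock-free thread-safe counter: AtomicLong.addAndGet performs
// the read-modify-write as a single atomic operation, so no
// synchronized block is needed.
public class AtomicCounter {
    private final AtomicLong count = new AtomicLong(0);

    public void add(long value) {
        this.count.addAndGet(value);
    }

    public long getCount() {
        return this.count.get();
    }
}
```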
Throughput of critical section
For a small critical section, locking the entire critical section may work fine. For a larger critical section, however, it may make sense to break it into smaller critical sections, allowing multiple threads to execute each smaller section. This can reduce contention on the shared resources and increase the throughput of the critical section as a whole.
Here is a very simple Java example:
public class TwoSums {
    private int sum1 = 0;
    private int sum2 = 0;

    public void add(int val1, int val2){
        synchronized(this){
            this.sum1 += val1;
            this.sum2 += val2;
        }
    }
}
Note how the add method adds values to two different sum variables. To prevent race conditions, the summing is executed inside a Java synchronized block. With this implementation, only a single thread can execute the summing at any given time.
However, because the two sum variables are independent of each other, you can split their updates into two separate synchronized blocks, each guarded by its own lock object, like this:
public class TwoSums {
    private int sum1 = 0;
    private int sum2 = 0;

    // Two separate lock objects, so the two sums
    // can be updated by different threads concurrently.
    private final Object sum1Lock = new Object();
    private final Object sum2Lock = new Object();

    public void add(int val1, int val2){
        synchronized(this.sum1Lock){
            this.sum1 += val1;
        }
        synchronized(this.sum2Lock){
            this.sum2 += val2;
        }
    }
}
Now two threads can execute the add method at the same time: one thread holds the first lock while another thread holds the second. This way, the threads spend less time waiting on each other.
Of course, this example is very simple. In real applications, splitting a critical section over shared resources may be far more complex and require more analysis of the possible execution orderings.
The above is the content of Java race conditions and critical sections. For more related content, please pay attention to the PHP Chinese website (www.php.cn)!