
How to use volatile keyword in Java

WBOY | 2023-04-22 15:55:16

1. Related concepts of memory model

As we all know, every instruction a computer executes runs on the CPU, and executing instructions inevitably involves reading and writing data. Temporary data produced while a program runs is stored in main memory (physical memory). This creates a problem: the CPU executes instructions far faster than it can read from and write to main memory, so if every data operation had to go through main memory, instruction execution would slow down dramatically. That is why the CPU has a cache.

That is, when the program is running, the data required for the operation will be copied from the main memory to the CPU's cache. Then the CPU can directly read data from its cache and write to it when performing calculations. After the operation is completed, the data in the cache is refreshed to the main memory. Take a simple example, such as the following code:

i = i + 1;

When a thread executes this statement, it first reads the value of i from main memory and copies it into the cache. The CPU then executes the instruction to increment i by 1 and writes the result to the cache; finally, the latest value of i in the cache is flushed back to main memory.

This code runs without problems in a single thread, but problems arise with multiple threads. On a multi-core CPU, each thread may run on a different core, so each thread has its own cache while running (the same problem actually occurs on single-core CPUs too, in the form of separate execution under thread scheduling). In this article we take a multi-core CPU as the example.

For example, there are two threads executing this code at the same time. If the value of i is 0 initially, then we hope that the value of i will become 2 after the execution of the two threads. But will this be the case?

There may be the following situation: initially, the two threads each read the value of i and store it in their respective CPU caches. Thread 1 adds 1 and writes the latest value of i, 1, to memory. At this time the value of i in thread 2's cache is still 0; after thread 2 adds 1, its value of i is 1, and thread 2 then writes that value of i to memory.

The final value of i is 1, not 2. This is the famous cache consistency problem. Variables that are accessed by multiple threads are usually called shared variables.

In other words, if a variable is cached in multiple CPUs (usually occurs in multi-threaded programming), then there may be a cache inconsistency problem.
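The lost update described above can be sketched as a single-threaded simulation (a toy illustration: the mainMemoryI field and the fixed interleaving below are assumptions standing in for real per-CPU caches):

```java
// Toy simulation of the lost update: each "CPU" copies the shared value
// into a local cache, increments the copy, and flushes it back, so one
// of the two increments is overwritten.
public class LostUpdateDemo {
    static int mainMemoryI;           // the shared variable i in "main memory"

    public static int simulate() {
        mainMemoryI = 0;              // i starts at 0
        int cache1 = mainMemoryI;     // CPU 1 copies i into its cache
        int cache2 = mainMemoryI;     // CPU 2 copies i into its cache
        cache1 = cache1 + 1;          // thread 1 increments its cached copy
        cache2 = cache2 + 1;          // thread 2 increments its cached copy
        mainMemoryI = cache1;         // thread 1 flushes 1 to main memory
        mainMemoryI = cache2;         // thread 2 flushes 1, overwriting thread 1's update
        return mainMemoryI;
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // prints 1, not 2
    }
}
```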

In order to solve the cache inconsistency problem, there are usually two solutions:

1) By adding LOCK# to the bus

2) Through cache consistency protocol

These two methods are provided at the hardware level.

In early CPUs, cache inconsistency was solved by asserting a LOCK# lock on the bus. Since communication between the CPU and other components goes through the bus, locking the bus blocks other CPUs from accessing components such as memory, so only one CPU can use the variable's memory. For example, in the case above, if a LOCK# signal is asserted on the bus while one thread executes i = i + 1, other CPUs can only read the memory where variable i resides, and perform their corresponding operations, after this code has completely finished executing. This solves the cache inconsistency problem.

However, there is a problem with the above method. During the period of locking the bus, other CPUs cannot access the memory, resulting in low efficiency.

So the cache coherence protocol appeared. The most famous one is Intel's MESI protocol, which ensures that the copy of shared variables used in each cache is consistent. Its core idea is: when the CPU writes data, if it finds that the variable being operated is a shared variable, that is, a copy of the variable also exists in other CPUs, a signal will be sent to notify other CPUs to set the cache line of the variable to an invalid state. Therefore, when other CPUs need to read this variable and find that the cache line that caches this variable in their own cache is invalid, then it will re-read it from memory.
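The invalidation idea can be sketched as a toy simulation (illustrative only: real MESI is a hardware protocol with Modified/Exclusive/Shared/Invalid states; the class and method names here are made up for this example):

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of cache-line invalidation: a write to one "cache" marks the
// same line invalid in every other cache, forcing a re-read from main memory.
public class InvalidateDemo {
    static int mainMemory = 0;
    static final List<CacheLine> allCaches = new ArrayList<>();

    static class CacheLine {
        int value;
        boolean valid = false;
    }

    static CacheLine newCache() {
        CacheLine c = new CacheLine();
        allCaches.add(c);
        return c;
    }

    static int read(CacheLine c) {
        if (!c.valid) {               // invalid line: reload from main memory
            c.value = mainMemory;
            c.valid = true;
        }
        return c.value;
    }

    static void write(CacheLine c, int v) {
        c.value = v;
        c.valid = true;
        mainMemory = v;               // write through to main memory
        for (CacheLine other : allCaches) {
            if (other != c) other.valid = false; // the "invalidate" signal
        }
    }

    public static int demo() {
        CacheLine cpu1 = newCache();
        CacheLine cpu2 = newCache();
        read(cpu1);                   // cpu1 caches the current value
        read(cpu2);                   // cpu2 caches the current value
        write(cpu1, 42);              // cpu1 writes; cpu2's line is invalidated
        return read(cpu2);            // cpu2 re-reads 42 from main memory
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 42
    }
}
```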

2. Three concepts in concurrent programming

In concurrent programming, we usually encounter the following three problems: atomicity problem, visibility problem, and ordering problem. Let’s first look at these three concepts in detail:

1. Atomicity

Atomicity: that is, one operation or multiple operations are either fully executed and the execution process is not interrupted by any factors, or they are not executed at all.

A very classic example is the bank account transfer problem:

For example, transferring 1,000 yuan from account A to account B must include two operations: subtracting 1,000 yuan from account A and adding 1,000 yuan to account B.

Just imagine the consequences if these two operations were not atomic. Suppose that after 1,000 yuan is deducted from account A, the operation is suddenly interrupted; then 500 yuan is withdrawn from account B, and only after that withdrawal is the operation of adding 1,000 yuan to account B performed. This leads to an inconsistent state in which account A has been debited 1,000 yuan but account B has not properly received the transferred 1,000 yuan.

Therefore, these two operations must be atomic to ensure that no unexpected problems occur.
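The transfer can be sketched as follows (a minimal illustration; the Account class and field names are assumptions made up for this example). Making the whole transfer one synchronized step ensures that, from the point of view of any other thread holding the same lock, the debit and the credit happen together or not at all:

```java
public class TransferDemo {
    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        int getBalance() { return balance; }
    }

    static final Object LOCK = new Object();

    // Debit 'from' and credit 'to' as one indivisible step:
    // no other thread holding LOCK can observe the intermediate state.
    static void transfer(Account from, Account to, int amount) {
        synchronized (LOCK) {
            from.balance -= amount;
            to.balance += amount;
        }
    }

    public static void main(String[] args) {
        Account a = new Account(5000);
        Account b = new Account(0);
        transfer(a, b, 1000);
        System.out.println(a.getBalance() + " " + b.getBalance()); // prints 4000 1000
    }
}
```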

What are the corresponding consequences in concurrent programming?

To give the simplest example, think about what would happen if the assignment process to a 32-bit variable is not atomic?

i = 9;

If a thread executes this statement, suppose for the moment that assigning to a 32-bit variable involves two steps: writing the low 16 bits and writing the high 16 bits.

Then a situation may occur: after the low 16 bits have been written, the thread is suddenly interrupted, and at this moment another thread reads the value of i; it then reads wrong data.
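The hypothetical torn write can be simulated directly (real JVMs do not split 32-bit writes; the two helper methods below are made up purely to illustrate a non-atomic assignment):

```java
// Hypothetical torn write: the 32-bit store is split into two 16-bit halves,
// and a "reader" observes the moment between them.
public class TornWriteDemo {
    static int i;

    static void writeLowHalf(int v)  { i = (i & 0xFFFF0000) | (v & 0x0000FFFF); }
    static void writeHighHalf(int v) { i = (i & 0x0000FFFF) | (v & 0xFFFF0000); }

    public static int demo() {
        i = 0;
        int newValue = 0x7FFF1234;   // the value being assigned
        writeLowHalf(newValue);      // low 16 bits written...
        int observed = i;            // ...another thread reads i right here
        writeHighHalf(newValue);     // high 16 bits written afterwards
        return observed;             // 0x1234: neither 0 nor 0x7FFF1234
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(demo())); // prints 1234
    }
}
```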

2. Visibility

Visibility means that when multiple threads access the same variable, if one thread modifies the value of the variable, other threads can immediately see the modified value.

For a simple example, look at the following code:

// Code executed by thread 1
int i = 0;
i = 10;

// Code executed by thread 2
j = i;

Suppose thread 1 is executed by CPU1 and thread 2 by CPU2. From the analysis above, when thread 1 executes i = 10, it first loads the initial value of i into CPU1's cache and then assigns 10 to it, so the value of i in CPU1's cache becomes 10, but it is not written to main memory immediately.

At this time thread 2 executes j = i. It first goes to main memory to read the value of i and loads it into CPU2's cache. Note that the value of i in memory is still 0 at this point, so the value of j will be 0, not 10.

This is a visibility problem. After thread 1 modifies variable i, thread 2 does not immediately see the value modified by thread 1.

3. Orderliness

Orderliness: That is, the order of program execution is executed in the order of code. For a simple example, look at the following code:

int i = 0;
boolean flag = false;

i = 1;        // Statement 1
flag = true;  // Statement 2

The above code defines an int variable and a boolean variable and then assigns values to them. Judging from the code order, statement 1 comes before statement 2. But when the JVM actually executes this code, will it guarantee that statement 1 executes before statement 2? Not necessarily. Why? Instruction reordering may occur here.

Let's explain what instruction reordering is. Generally speaking, to improve execution efficiency, the processor may optimize the input code. It does not guarantee that the statements execute in exactly the order they appear in the code, but it does guarantee that the final result of the program is consistent with the result of executing the code in order.

For example, in the above code, whether statement 1 or statement 2 is executed first has no impact on the final program result. Then it is possible that during execution, statement 2 is executed first and statement 1 is executed later.

But it should be noted that although the processor will reorder the instructions, it will ensure that the final result of the program will be the same as the sequential execution result of the code. So what guarantee does it rely on? Look at the following example:

int a = 10;    // Statement 1
int r = 2;     // Statement 2
a = a + 3;     // Statement 3
r = a * a;     // Statement 4

This code has 4 statements, so one possible execution order is: Statement 2 → Statement 1 → Statement 3 → Statement 4.

But could this be the execution order: Statement 2 → Statement 1 → Statement 4 → Statement 3?

Impossible, because the processor will consider the data dependencies between instructions when reordering. If an instruction, Instruction 2, must use the result of Instruction 1, then the processor will ensure that Instruction 1 will be executed before Instruction 2.

Although reordering will not affect the results of program execution within a single thread, what about multi-threading? Let’s look at an example:

// Thread 1:
context = loadContext();  // Statement 1
inited = true;            // Statement 2

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingwithconfig(context);

In the above code, since statement 1 and statement 2 have no data dependency, they may be reordered. If a reordering occurs and statement 2 executes first during thread 1's execution, thread 2 will believe that initialization has completed, jump out of the while loop, and execute the doSomethingwithconfig(context) method; but at this time context has not been initialized, which causes a program error.

As can be seen from the above, instruction reordering will not affect the execution of a single thread, but will affect the correctness of concurrent execution of threads.

In other words, for concurrent programs to execute correctly, atomicity, visibility, and orderliness must be guaranteed. As long as one of them is not guaranteed, it may cause the program to run incorrectly.

3. Java memory model

Earlier I talked about some problems that may arise in memory models and concurrent programming. Let's take a look at the Java memory model and study what guarantees the Java memory model provides us and what methods and mechanisms are provided in Java to ensure the correctness of program execution when performing multi-threaded programming.

The Java Virtual Machine Specification attempts to define a Java Memory Model (JMM) to shield the memory-access differences between hardware platforms and operating systems, so that Java programs achieve a consistent memory-access effect on all platforms. So what does the Java memory model stipulate? It defines the access rules for variables in a program and, to a large extent, the order of program execution. Note that, to obtain better execution performance, the Java memory model does not restrict the execution engine from using the processor's registers or caches to speed up instruction execution, nor does it restrict the compiler from reordering instructions. In other words, the Java memory model also has cache consistency issues and instruction reordering issues.

The Java memory model stipulates that all variables are stored in main memory (similar to the physical memory mentioned earlier), and each thread has its own working memory (similar to the previous cache). All operations on variables by threads must be performed in working memory and cannot directly operate on main memory. And each thread cannot access the working memory of other threads.

To give a simple example, in Java, execute the following statement:

i = 10;

The executing thread must first assign the value to the cache line containing variable i in its own working memory, and then write it back to main memory, rather than writing the value 10 directly into main memory.

So what guarantees does the Java language itself provide for atomicity, visibility, and ordering?

1. Atomicity

In Java, reading and assigning operations to variables of basic data types are atomic operations, that is, these operations cannot be interrupted and are either executed or not.

Although the above sentence seems simple, it is not so easy to understand. Look at the following example:

Please analyze which of the following operations are atomic operations:

x = 10;       // Statement 1
y = x;        // Statement 2
x++;          // Statement 3
x = x + 1;    // Statement 4

At first glance, some friends may say that the operations in the above four statements are all atomic operations. In fact, only statement 1 is an atomic operation, and the other three statements are not atomic operations.

Statement 1 directly assigns the value 10 to x, which means that the thread executing this statement will directly write the value 10 into the working memory.

Statement 2 actually contains two operations: it first reads the value of x, then writes the value of x into working memory. Although reading the value of x and writing the value of x to working memory are each atomic operations, together they are not atomic.

Similarly, x++ and x = x + 1 each comprise three operations: reading the value of x, adding 1, and writing the new value.

Therefore, among the four statements above, only the operation of statement 1 is atomic.

In other words, only simple reading and assignment (and the value assigned must be a literal number; assignment between variables is not an atomic operation) are atomic operations.

However, there is one thing to note here: on a 32-bit platform, reading and assigning 64-bit data requires two operations, and its atomicity cannot be guaranteed. But it seems that in the latest JDK, the JVM has guaranteed that reading and assigning 64-bit data are also atomic operations.

As can be seen from the above, the Java memory model only guarantees that basic reading and assignment are atomic operations. If you want to achieve atomicity for a larger range of operations, you can achieve it through synchronized and Lock. Since synchronized and Lock can ensure that only one thread executes the code block at any time, there is no atomicity problem, thus ensuring atomicity.

2. Visibility

For visibility, Java provides the volatile keyword to ensure visibility.

When a shared variable is modified by volatile, it is guaranteed that the modified value is immediately updated to main memory, and when another thread needs to read it, it reads the new value from memory.

Ordinary shared variables cannot guarantee visibility, because after an ordinary shared variable is modified, it is uncertain when it will be written to main memory; when another thread reads it, the old value may still be in memory, so visibility is not guaranteed.

In addition, visibility can also be guaranteed through synchronized and Lock. Synchronized and Lock can ensure that only one thread acquires the lock at the same time and then executes the synchronization code, and the modifications to the variables are flushed to the main memory before releasing the lock. Visibility is therefore guaranteed.

3. Orderliness

In the Java memory model, the compiler and processor are allowed to reorder instructions; the reordering process does not affect the execution of single-threaded programs, but it can affect the correctness of multi-threaded concurrent execution.

In Java, you can use the volatile keyword to ensure a certain "orderliness" (the specific principle is described in the next section). In addition, orderliness can be ensured through synchronized and Lock. Obviously, synchronized and Lock ensure that one thread executes synchronization code at each moment, which is equivalent to letting threads execute synchronization code sequentially, which naturally ensures orderliness.

In addition, the Java memory model has some innate "orderliness", that is, orderliness that can be guaranteed without any means. This is often called the happens-before principle. If the execution order of two operations cannot be deduced from the happens-before principle, then their ordering is not guaranteed, and the virtual machine can reorder them at will.

Let’s introduce the happens-before principle in detail:

Program order rule: within a thread, according to code order, operations written earlier happen-before operations written later

Locking rule: an unlock operation happens-before a subsequent lock operation on the same lock

Volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable

Transitivity rule: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C

Thread start rule: the start() method of a Thread object happens-before every action of the started thread

Thread interruption rule: the call to a thread's interrupt() method happens-before the interrupted thread's code detects the interrupt

Thread termination rule: all operations in a thread happen-before the detection of that thread's termination; we can detect that a thread has terminated via Thread.join() returning or Thread.isAlive() returning false

Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method

These 8 principles are excerpted from "In-depth Understanding of the Java Virtual Machine".

Among these 8 rules, the first 4 rules are more important, and the last 4 rules are obvious.

Let’s explain the first 4 rules:

For the program order rule, my understanding is that the execution of a piece of code appears ordered within a single thread. Note that although this rule says "operations written earlier happen-before operations written later", this should mean that the program appears to execute in code order, because the virtual machine may reorder the program's instructions. Although it reorders, the final execution result is consistent with the result of sequential execution, and it only reorders instructions that have no data dependencies. So within a single thread, program execution appears to happen in order; that is how this rule should be understood. In fact, this rule guarantees the correctness of execution results in a single thread, but it cannot guarantee correctness across multiple threads.

The second rule is also easier to understand. That is to say, whether in a single thread or multiple threads, if the same lock is locked, the lock must be released first before the lock operation can continue.

The third rule is more important and will be the focus of the following sections. The intuitive explanation: if a thread first writes a volatile variable and a thread then reads it, the write operation happens-before the read operation.

The fourth rule actually reflects the transitive nature of the happens-before principle.
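The thread start and thread termination rules can be observed directly (a minimal sketch; the field names are made up for this example). The write before start() is visible inside the new thread, and the thread's own write is visible after join() returns, even though nothing here is volatile or synchronized:

```java
public class HappensBeforeDemo {
    static int beforeStart = 0;    // deliberately NOT volatile
    static int insideThread = 0;

    public static int demo() {
        beforeStart = 42;          // happens-before t.start()
        Thread t = new Thread(() -> {
            // thread start rule: this read is guaranteed to see 42
            insideThread = beforeStart + 1;
        });
        t.start();
        try {
            t.join();              // thread termination rule: t's writes are visible now
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return insideThread;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 43
    }
}
```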

4. In-depth analysis of volatile keyword

I have talked about a lot of things before, but they all pave the way for talking about the volatile keyword, so let’s get into the topic next.

1. Two levels of semantics of the volatile keyword

Once a shared variable (a class member variable or a class static member variable) is modified by volatile, it carries two levels of semantics:

1) Ensures the visibility when different threads operate on this variable, that is, if one thread modifies the value of a variable, the new value is immediately visible to other threads.

2) Instruction reordering is prohibited.

Let’s look at a piece of code first. If thread 1 is executed first and thread 2 is executed later:

//Thread 1

boolean stop = false;

while(!stop){

doSomething();

}

//Thread 2

stop = true;

This is a very typical piece of code, and many people may use this marking approach when interrupting a thread. But will this code actually always run correctly? That is, will the thread necessarily be interrupted? Not necessarily. Most of the time this code can interrupt the thread, but there is a chance it cannot; the possibility is very small, but once it happens it causes an infinite loop.

Let's explain why this code may cause the thread to be unable to be interrupted. As explained before, each thread has its own working memory during running, so when thread 1 is running, it will copy the value of the stop variable and put it in its own working memory.

Then, when thread 2 changes the value of the stop variable but has not yet written it back to main memory, thread 2 turns to do other things. Thread 1, not knowing about thread 2's change, keeps looping.

But after using volatile modification, it becomes different:

First: Using the volatile keyword will force the modified value to be written to the main memory immediately;

Second: if the volatile keyword is used, when thread 2 makes the modification, the cache line caching the variable stop in thread 1's working memory is invalidated (reflected at the hardware level, the corresponding cache line in the CPU's L1 or L2 cache is invalidated);

Third: Since the cache line of the cache variable stop in the working memory of thread 1 is invalid, thread 1 will go to the main memory to read the value of the variable stop again.

So when thread 2 modifies the value of stop (this includes two operations: modifying the value in thread 2's working memory, then writing the modified value back to memory), the cache line caching stop in thread 1's working memory is invalidated. When thread 1 then reads it, it finds its cache line invalid; it waits for the main memory address corresponding to the cache line to be updated, and then reads the latest value from main memory.

Then what thread 1 reads is the latest correct value.
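Put together as a runnable sketch (the worker body is a stand-in for doSomething(); the 50 ms pause and 2 s join timeout are arbitrary choices for this demo):

```java
public class StopFlagDemo {
    static volatile boolean stop = false; // volatile: the write is promptly visible
    static long iterations = 0;

    public static boolean demo() {
        Thread worker = new Thread(() -> {
            while (!stop) {
                iterations++;             // doSomething()
            }
        });
        worker.start();
        try {
            Thread.sleep(50);             // let the worker spin for a moment
            stop = true;                  // visible to the worker thanks to volatile
            worker.join(2000);            // without volatile, this could hang
        } catch (InterruptedException e) {
            return false;
        }
        return !worker.isAlive();         // true: the worker terminated
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```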

2. Does volatile guarantee atomicity?

From the above, we know that the volatile keyword guarantees the visibility of operations, but can volatile guarantee that the operation of variables is atomic?

Let’s look at an example:

public class Test {
    public volatile int inc = 0;

    public void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // Ensure that all previous threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}

Think about it, what is the output of this program? Maybe some friends think it is 10,000. But in fact, when you run it, you will find that the results are inconsistent every time, and they are always a number less than 10,000.

Some friends may wonder: the code performs an auto-increment on the variable inc, and since volatile guarantees visibility, the modified value can be seen by other threads after each thread increments inc; 10 threads each performed 1000 increments, so the final value of inc should be 1000*10=10000.

There is a misunderstanding here. Volatile does ensure visibility; the error in the above program is that atomicity is not guaranteed. Visibility only ensures that the latest value is read each time, but volatile cannot guarantee that operations on the variable are atomic.

As mentioned before, the auto-increment operation is not atomic. It includes reading the original value of the variable, adding 1, and writing to the working memory. That means that the three sub-operations of the auto-increment operation may be executed separately, which may lead to the following situation:

If the value of variable inc is 10 at a certain time,

Thread 1 performs an auto-increment operation on the variable. Thread 1 first reads the original value of the variable inc, and then thread 1 is blocked;

Then thread 2 performs an auto-increment on the variable; thread 2 also reads the original value of inc. Since thread 1 only read the variable and did not modify it, the cache line for inc in thread 2's working memory is not invalidated, so thread 2 goes straight to main memory to read inc, finds its value is 10, adds 1, writes 11 to working memory, and finally writes it to main memory.

Then thread 1 continues with the add-1 step. Since it has already read the value of inc, note that the value of inc in thread 1's working memory is still 10, so after thread 1 adds 1 the value of inc is 11; it then writes 11 to working memory and finally to main memory.

Then after the two threads each performed an auto-increment operation, inc only increased by 1.

Having explained this, some friends may still have a question: isn't it guaranteed that when a volatile variable is modified, the cache lines become invalid, so other threads read the new value? Yes, that is correct; that is the volatile variable rule among the happens-before rules above. But note that after thread 1 read the variable and was blocked, it had not yet modified inc. So although volatile guarantees that thread 2 reads the value of inc from memory, thread 1 had not modified it, and thread 2 therefore could not see a modified value.

The root cause is here. The auto-increment operation is not an atomic operation, and volatile cannot guarantee that any operation on a variable is atomic.

Changing the above code to any of the following can achieve the effect:

Using synchronized:

public class Test {
    public int inc = 0;

    public synchronized void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // Ensure that all previous threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}


Using Lock:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Test {
    public int inc = 0;
    Lock lock = new ReentrantLock();

    public void increase() {
        lock.lock();
        try {
            inc++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // Ensure that all previous threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}


Using AtomicInteger:

import java.util.concurrent.atomic.AtomicInteger;

public class Test {
    public AtomicInteger inc = new AtomicInteger();

    public void increase() {
        inc.getAndIncrement();
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // Ensure that all previous threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}


Starting with Java 1.5, the java.util.concurrent.atomic package provides some atomic operation classes, which wrap increment (add 1), decrement (subtract 1), addition (add a number), and subtraction (subtract a number) on basic data types and guarantee that these operations are atomic. The atomic classes implement atomicity with CAS (Compare And Swap). CAS is in turn implemented with the CMPXCHG instruction provided by the processor, and the processor's execution of the CMPXCHG instruction is an atomic operation.
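The CAS retry loop that getAndIncrement performs can be sketched as follows (a simplified approximation; the real implementation uses Unsafe/VarHandle intrinsics rather than a helper method like this):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Sketch of the retry loop inside getAndIncrement.
    static int getAndIncrement(AtomicInteger a) {
        while (true) {
            int current = a.get();              // read the current value
            int next = current + 1;
            // compareAndSet only succeeds if nobody changed the value
            // between our read and our write; otherwise we retry.
            if (a.compareAndSet(current, next)) {
                return current;
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger inc = new AtomicInteger(10);
        System.out.println(getAndIncrement(inc)); // prints 10 (the old value)
        System.out.println(inc.get());            // prints 11
    }
}
```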

3. Can volatile guarantee ordering?

As mentioned earlier, the volatile keyword forbids instruction reordering, so volatile can guarantee ordering to a certain extent.

The volatile keyword's prohibition of instruction reordering has two aspects:

1) When execution reaches a read or write of a volatile variable, all changes made by the operations before it must have fully taken place and their results must be visible to the operations after it; the operations after it must not yet have taken place.

2) During instruction optimization, statements that precede an access to a volatile variable cannot be moved to execute after it, nor can statements after the volatile variable access be moved to execute before it.

The above may sound convoluted, so here is a simple example:

// x and y are non-volatile variables
// flag is a volatile variable

x = 2;        // Statement 1
y = 0;        // Statement 2
flag = true;  // Statement 3
x = 4;        // Statement 4
y = -1;       // Statement 5

Since flag is a volatile variable, during instruction reordering statement 3 will not be moved before statements 1 and 2, nor will it be moved after statements 4 and 5. But note that no guarantee is made about the relative order of statements 1 and 2, or of statements 4 and 5.

Moreover, the volatile keyword guarantees that by the time statement 3 executes, statements 1 and 2 have completed, and their results are visible to statements 3, 4, and 5.

Now let's return to the earlier example:

// Thread 1:
context = loadContext();  // Statement 1
inited = true;            // Statement 2

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingwithconfig(context);

When this example was given earlier, we mentioned that statement 2 might execute before statement 1, in which case context might not yet be initialized, and thread 2 would operate on an uninitialized context, causing a program error.

If the inited variable is modified with the volatile keyword, this problem does not occur, because by the time statement 2 executes, context is guaranteed to have been initialized.

4. The principle and implementation mechanism of volatile

The preceding sections covered the use of the volatile keyword; now let's explore how volatile actually guarantees visibility and forbids instruction reordering.

The following passage is excerpted from "In-depth Understanding of the Java Virtual Machine":

"Observing the assembly code generated with and without the volatile keyword, we find that when the volatile keyword is present, an extra lock-prefixed instruction is generated."

The lock-prefixed instruction is effectively equivalent to a memory barrier (also called a memory fence). A memory barrier provides three functions:

1) It ensures that instruction reordering does not move instructions after the barrier to before it, nor instructions before the barrier to after it; that is, when execution reaches the memory barrier instruction, all operations before it have completed.

2) It forces modifications in the cache to be written to main memory immediately.

3) In the case of a write, it invalidates the corresponding cache lines in other CPUs.

5. Scenarios for using the volatile keyword

The synchronized keyword prevents multiple threads from executing a block of code at the same time, which can significantly affect execution efficiency. In some cases volatile performs better than synchronized, but note that volatile cannot replace synchronized, because volatile cannot guarantee the atomicity of operations. In general, two conditions must be met to use volatile:

1) The writing operation to the variable does not depend on the current value

2) The variable is not included in an invariant with other variables

In effect, these conditions indicate that the valid values that can be written to volatile variables are independent of any program state, including the variable's current state.

In fact, my understanding is that these two conditions are needed to ensure the operations involved are atomic, so that programs using the volatile keyword can execute correctly under concurrency.

Here are a few scenarios where volatile is used in Java.

1. Status flag

volatile boolean flag = false;

while (!flag) {
    doSomething();
}

public void setFlag() {
    flag = true;
}

volatile boolean inited = false;

// Thread 1:
context = loadContext();
inited = true;

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingwithconfig(context);

2. Double-checked locking

class Singleton {
    private volatile static Singleton instance = null;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}
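A runnable usage sketch (the DclDemo class mirrors the Singleton above, renamed so this example is self-contained): ten threads race on getInstance(), and because of the volatile field and the second check under the lock, every thread ends up with the same instance.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DclDemo {
    private static volatile DclDemo instance = null;

    private DclDemo() { }

    public static DclDemo getInstance() {
        if (instance == null) {                   // first check, no lock
            synchronized (DclDemo.class) {
                if (instance == null)             // second check, under lock
                    instance = new DclDemo();     // volatile forbids reordering here
            }
        }
        return instance;
    }

    public static int demo() {
        Set<DclDemo> seen = ConcurrentHashMap.newKeySet();
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> seen.add(getInstance()));
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            return -1;
        }
        return seen.size();  // 1: every thread saw the same instance
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1
    }
}
```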


Statement: This article is reproduced from yisu.com.