In the previous chapter we learned that synchronized is a heavyweight lock. Although the JVM has optimized it considerably, the volatile keyword introduced below can be thought of as a lightweight synchronized: accessing a volatile variable costs less than using synchronized because it causes no thread context switching or scheduling. The Java Language Specification defines volatile as follows:
The Java programming language allows threads to access shared variables. To ensure that shared variables can be updated accurately and consistently, a thread should ensure that it accesses such variables exclusively, by acquiring an exclusive lock.
That is a bit convoluted. Put simply, if a variable is declared volatile, Java guarantees that all threads see a consistent value for it: the moment one thread updates the volatile shared variable, other threads can see the update. This is called thread visibility.
volatile looks simple enough; using it is just a matter of putting volatile in front of a variable. But using it well is not easy (LZ admits that I still use it poorly and am often unsure when applying it).
Understanding volatile is actually somewhat difficult, because it is tied to Java's memory model. So before digging into volatile we first need some concepts from the Java memory model. What follows is only a preliminary introduction; LZ will cover the Java memory model in detail later.
When a computer runs a program, every instruction executes in the CPU, and execution inevitably involves reading and writing data. The data a program works with lives in main memory, which creates a problem: reading and writing main memory is much slower than executing instructions in the CPU. If every data access had to go through main memory, efficiency would suffer badly, which is why the CPU cache exists. A CPU cache is private to its CPU and is used only by the thread running on that CPU.
Although the CPU cache solves the efficiency problem, it brings a new one: data consistency. While a program runs, the data it needs is copied into the CPU cache; during computation the CPU no longer talks to main memory but reads and writes the cached copy directly, and only once the computation finishes does it flush the data back to main memory. A simple example:
i++
When a thread runs this code, it first reads i from main memory (say i = 1), copies it into the CPU cache, the CPU performs the +1 operation (yielding 2), writes the result (2) into the cache, and finally flushes it back to main memory. With a single thread this works fine; the trouble starts with multiple threads, as follows:
Suppose two threads A and B both perform this operation (i++). By normal logic, the value of i in main memory should end up as 3. But is that what happens? Let's analyze:
Both threads read the value of i (1) from main memory into their respective caches. Thread A performs the +1 operation, writes the result to its cache, and finally writes it back to main memory; at this point main memory holds i == 2. Thread B does exactly the same, and i in main memory is still 2. So the final result is 2, not 3. This phenomenon is a cache consistency problem.
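To make the lost update concrete, here is a minimal runnable sketch of the race described above (the class and field names are illustrative, not from the original text):

// Two threads increment a shared counter; increments get lost because
// count++ is a read-modify-write sequence, not a single atomic step.
public class Counter {
    static int count = 0; // shared, neither volatile nor synchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // read, add 1, write back
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(count); // usually prints less than 200000
    }
}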
There are two ways to solve the cache consistency problem:
Lock the bus with a LOCK# signal
Use a cache coherence protocol
But option 1 has a problem: it is exclusive. While the bus is locked with LOCK#, only one CPU can run and all the others are blocked, which is rather inefficient.
The second option, a cache coherence protocol (the MESI protocol), ensures that the copies of a shared variable held in each cache stay consistent. The core idea: when a CPU writes data and finds that the variable it is operating on is shared, it notifies the other CPUs that their cache lines for that variable are now invalid; when those CPUs next read the variable, they find it invalid and reload the data from main memory.
The above explains how data consistency is ensured at the hardware and operating-system level. Now let's turn to the Java memory model and look briefly at what guarantees it provides, and what methods and mechanisms Java offers to let us ensure correct program execution when writing multi-threaded code.
In concurrent programming we generally run into three basic concepts: atomicity, visibility, and orderliness. Let's look at each and see how volatile fits in.
Atomicity: an operation (or a group of operations) either executes in full, without being interrupted by any factor along the way, or does not execute at all.
Atomicity works like a transaction in a database: the operations are a team that lives and dies together. Atomicity is actually easy to understand; just look at the following simple example:
i = 0;      // 1
j = i;      // 2
i++;        // 3
i = j + 1;  // 4
Of the four operations above, which are atomic and which are not? If you are unsure, you might assume all of them are atomic; in fact only 1 is an atomic operation, and the rest are not.
1 — in Java, reading a variable of a primitive type and assigning a value to it are atomic operations;
2 — contains two operations: read i, then assign the value of i to j;
3 — contains three operations: read the value of i, compute i + 1, then assign the result back to i;
4 — same as 3.
In a single-threaded environment we can treat each of these steps as atomic, but a multi-threaded environment is different: Java only guarantees that reads and assignments of primitive-type variables are atomic (note: on a 32-bit JDK, reading 64-bit data such as long and double is not atomic). To guarantee atomicity in a multi-threaded environment, use locks or synchronized.
volatile cannot guarantee the atomicity of compound operations.
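A minimal sketch of that limitation, contrasted with AtomicInteger as one standard fix (the class and field names are illustrative, not from the original text):

// Even when the counter is volatile, count++ is still a compound
// read-modify-write, so increments can be lost; AtomicInteger increments atomically.
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileNotAtomic {
    static volatile int volatileCount = 0;                        // visible, but ++ is not atomic
    static final AtomicInteger atomicCount = new AtomicInteger(); // atomic increments

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCount++;               // may lose increments
                atomicCount.incrementAndGet(); // never loses increments
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println("volatile: " + volatileCount);     // often < 200000
        System.out.println("atomic:   " + atomicCount.get()); // always 200000
    }
}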
Visibility means that when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value.
As analyzed above, in a multi-threaded environment one thread's operations on a shared variable may not be visible to other threads.
Java provides volatile to guarantee visibility.
When a variable is declared volatile, the thread's local working-memory copy is treated as invalid: when a thread modifies the shared variable, the change is immediately written back to main memory, and when other threads read the shared variable, they read it directly from main memory.
Of course, synchronized and locks can also guarantee visibility.
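A minimal sketch of the visibility guarantee in action, using a volatile stop flag (the class and field names are illustrative, not from the original text):

// Without volatile, the worker thread might keep reading a stale cached
// 'stopped' and spin forever; with volatile, the main thread's write becomes
// visible to the worker.
public class StopFlag {
    static volatile boolean stopped = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopped) {
                // busy work
            }
            System.out.println("worker saw stopped == true");
        });
        worker.start();
        Thread.sleep(100);
        stopped = true; // volatile write: visible to the worker's next read
        worker.join();
    }
}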
Orderliness: the program executes in the order the code is written.
In the Java memory model, the compiler and processor are allowed to reorder instructions for the sake of efficiency. Reordering never changes the result of a single-threaded run, but it can affect multi-threaded behavior.
Java provides volatile to guarantee a certain degree of orderliness. The most famous example is DCL (double-checked locking) in the singleton pattern, sketched below; LZ will not elaborate further here.
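For reference, the textbook form of the DCL idiom (a common version, not spelled out in the original text); volatile here prevents the reference from being published before the constructor finishes:

// Double-checked locking: volatile stops reordering from exposing a
// half-constructed Singleton to other threads.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // safe publication via volatile write
                }
            }
        }
        return instance;
    }
}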
The JMM is a big topic that the little said above cannot do justice to; the brief introduction here merely lays the groundwork for volatile.
volatile guarantees thread visibility and provides a certain degree of orderliness, but it cannot guarantee atomicity. At the bottom of the JVM, volatile is implemented with "memory barriers".
That statement carries two layers of meaning:
it guarantees visibility but not atomicity;
it forbids instruction reordering.
The first layer needs no further introduction; below we focus on instruction reordering.
To improve performance, compilers and processors usually reorder instructions when executing a program:
Compiler reordering: the compiler may rearrange the execution order of statements, provided it does not change the semantics of a single-threaded program;
Processor reordering: if there are no data dependencies, the processor may change the execution order of the machine instructions corresponding to the statements.
Instruction reordering has no impact on a single thread and will not change the program's result, but it can break the correctness of multi-threaded code. Since reordering can affect the correctness of multi-threaded execution, we need a way to forbid it. So how does the JVM forbid reordering? We will answer that shortly; first, let's look at another principle: happens-before. The happens-before principle guarantees the program's "orderliness": if the execution order of two operations cannot be derived from the happens-before rules, their order is not guaranteed and they may be reordered at will. The rules are as follows:
Within a single thread, earlier operations happen-before later ones. (That is, code executes in program order within one thread. However, the compiler and processor may still reorder as long as the result of single-threaded execution is unaffected, which is legal. In other words, this rule alone does not rule out compiler or instruction reordering.)
An unlock of a monitor happens-before every subsequent lock of that monitor. (synchronized rule)
A write to a volatile variable happens-before every subsequent read of that variable. (volatile rule)
A thread's start() method happens-before every subsequent operation of that thread. (thread start rule)
All operations of a thread happen-before another thread's successful return from join() on that thread.
If a happens-before b, and b happens-before c, then a happens-before c. (transitivity)
Let's focus on the third rule, the volatile rule: a write to a volatile variable happens-before subsequent reads of it. To implement volatile's memory semantics, the JMM restricts reordering according to the following rules:
when the second operation is a volatile write, the first operation, whatever it is, must not be reordered after it;
when the first operation is a volatile read, the second operation, whatever it is, must not be reordered before it;
when the first operation is a volatile write and the second is a volatile read, they must not be reordered.
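To see what the volatile rule buys in practice, here is a minimal safe-publication sketch (the class and field names are illustrative, not from the original text):

// Because the volatile write to 'ready' happens-before the volatile read of
// 'ready', a reader that sees ready == true is guaranteed to see data == 42.
class Publication {
    int data;
    volatile boolean ready;

    void writer() {
        data = 42;    // ordinary write
        ready = true; // volatile write: cannot be reordered before the data write
    }

    void reader() {
        if (ready) {                  // volatile read
            System.out.println(data); // guaranteed to print 42
        }
    }
}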
Now that we have some understanding of the happens-before principle, let's return to the question: how does the JVM forbid reordering?
Comparing the assembly code generated with and without the volatile keyword, we find that adding volatile produces an extra lock-prefixed instruction. The lock prefix effectively acts as a memory barrier, a set of processor instructions used to impose ordering restrictions on memory operations; at the bottom, volatile is implemented with memory barriers. To implement the rules above, the JMM conservatively inserts a StoreStore barrier before and a StoreLoad barrier after each volatile write, and a LoadLoad barrier and a LoadStore barrier after each volatile read.
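As a purely illustrative sketch (you never write barriers in Java source; the comments only mark where the conservative strategy described above conceptually places them):

// Conceptual barrier placement around volatile accesses.
class BarrierSketch {
    volatile int v;
    int plain;

    void write(int x) {
        plain = 42;
        // StoreStore barrier: the plain write above cannot sink below the volatile write
        v = x; // volatile write
        // StoreLoad barrier: the volatile write cannot be reordered with later reads
    }

    int read() {
        int r = v; // volatile read
        // LoadLoad and LoadStore barriers: later reads/writes cannot float above it
        return r + plain;
    }
}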
Let's pause the analysis of volatile here. The JMM is a fairly large system that cannot be explained clearly in a few words; later we will revisit volatile in more depth together with the JMM.
volatile looks simple, but understanding it thoroughly is still hard; what we have here is only a basic grasp. volatile is slightly lighter-weight than synchronized and can replace synchronized in some situations, but not in all of them. volatile applies only when the following two conditions are both met:
Writes to the variable do not depend on its current value;
The variable is not included in invariants involving other variables.
volatile is typically used in two scenarios: as a status flag and in double-checked locking (both sketched above).