Introduction to the classification of locks
Optimistic locks and pessimistic locks
At the broadest level, locks can be classified as optimistic or pessimistic. Optimistic locking and pessimistic locking do not refer to specific locks (Java has no lock implementation literally named "optimistic lock" or "pessimistic lock"); they are two different strategies for handling concurrency.
An optimistic lock is, as the name suggests, optimistic: every time it reads the data, it assumes no one else will modify it, so it does not lock. When it wants to update the data, it first checks whether anyone else modified the data between the read and the update. If the data has been modified, it reads it again and retries the update, looping until the update succeeds (a thread that fails to update may, of course, also give up).
A pessimistic lock is just as pessimistic as it sounds: every time it reads the data, it assumes someone else will modify it, so it locks on every access. Any other thread that wants the data is then blocked until the pessimistic lock is released, at which point one of the waiting threads acquires the lock and reads the data.
A pessimistic lock blocks other transactions, while an optimistic lock rolls back and retries. Each has advantages and disadvantages; neither is inherently better, they simply suit different scenarios. For example, optimistic locking fits situations with relatively few writes, i.e. where conflicts rarely occur: it saves the overhead of locking and increases the overall throughput of the system. If conflicts are frequent, however, the upper-layer application keeps retrying, which actually hurts performance, so pessimistic locking is more appropriate there.
Summary: optimistic locking suits scenarios with few writes and rare conflicts; pessimistic locking suits write-heavy scenarios with frequent conflicts.
The basis of optimistic locking --- CAS
To understand how optimistic locking is implemented, we first need to understand one concept: CAS.
What is CAS? Compare-and-Swap: compare, then swap (or compare and set).
Compare: read a value A; before updating it to B, check whether the current value is still A (i.e. it has not been modified by another thread; we ignore the ABA problem here).
Swap: if it is still A, update it to B and finish; if not, do not update.
These two steps together form a single atomic operation: from the CPU's point of view they happen as one step, completed "instantaneously".
With CAS, you can implement an optimistic lock:
public class OptimisticLockSample {

    public void test() {
        int data = 123; // shared data

        // a thread that updates the data performs the following steps
        for (;;) {
            int oldData = data;
            int newData = doSomething(oldData);

            // the lines below simulate a CAS update, trying to update the value of data
            if (data == oldData) {    // compare
                data = newData;       // swap
                break;                // finished
            } else {
                // do nothing, loop and retry
            }
        }
    }

    // placeholder computation so the sample compiles
    private int doSomething(int oldData) {
        return oldData + 1;
    }

    /**
     * Obviously, the code in test() is not atomic at all; it only illustrates
     * the flow of CAS, because real CAS relies on CPU instructions.
     */
}
In Java, CAS is implemented through native methods:
public final class Unsafe {
    ...
    public final native boolean compareAndSwapObject(Object var1, long var2, Object var4, Object var5);

    public final native boolean compareAndSwapInt(Object var1, long var2, int var4, int var5);

    public final native boolean compareAndSwapLong(Object var1, long var2, long var4, long var6);
    ...
}
What is written above is a simple, intuitive implementation of optimistic locking (more precisely, it just shows the optimistic-locking process). It allows multiple threads to read at the same time, because there is no locking at all. When the data is updated, one and only one thread can update it successfully, and the other threads have to roll back and retry. CAS uses CPU instructions to guarantee atomicity at the hardware level, achieving an effect similar to a lock.
As you can see from the whole process, optimistic locking involves no lock or unlock operations, which is why the optimistic-locking strategy is also called lock-free programming. In other words, an optimistic lock is not really a "lock" at all; it is just a CAS algorithm that retries in a loop.
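To make the retry loop concrete, here is a minimal sketch (my own example, not part of the original flow above) using java.util.concurrent.atomic.AtomicInteger, whose compareAndSet method performs the hardware-level CAS described here:

import java.util.concurrent.atomic.AtomicInteger;

public class CasRetryDemo {

    private static final AtomicInteger counter = new AtomicInteger(0);

    // lock-free update: read the old value, compute the new value,
    // and retry until compareAndSet (a CAS) succeeds
    public static int addTen() {
        for (;;) {
            int oldValue = counter.get();
            int newValue = oldValue + 10;
            if (counter.compareAndSet(oldValue, newValue)) { // compare and swap
                return newValue;
            }
            // CAS failed: another thread changed the value first, so loop and retry
        }
    }

    public static void main(String[] args) {
        System.out.println(addTen()); // prints 10
    }
}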
synchronized and the Lock interface
There are two ways to lock in Java: one is the synchronized keyword, and the other is an implementation class of the Lock interface.
I once saw a vivid comparison in an article: the synchronized keyword is like an automatic transmission, which is enough for everyday driving; but if you want to perform more advanced maneuvers, such as drifting, you need a manual transmission, and that is what the implementation classes of the Lock interface are.
Moreover, synchronized has become very efficient after the optimizations made in successive Java versions; it is simply not as flexible and convenient to use as the Lock implementation classes.
The lock-upgrade process of synchronized is the core of these optimizations: biased lock -> lightweight lock -> heavyweight lock.

class Test {
    private static final Object object = new Object();

    public void test() {
        synchronized (object) {
            // do something
        }
    }
}
When the synchronized keyword is used to lock a code block, the lock placed on the object (the object in the code above) is not initially a heavyweight lock, but a biased lock.
A biased lock is, literally, a lock that is "biased towards the first thread that acquires it". After that thread exits the synchronized block, it does not actively release the biased lock. When it reaches the synchronized block for the second time, it checks whether the thread currently holding the lock is itself (the ID of the holding thread is stored in the object header); if so, it simply continues. Since the lock was never released, there is no need to lock again. If a single thread uses the lock from beginning to end, the biased lock obviously adds almost no overhead, and performance is extremely high.
As soon as a second thread joins the competition for the lock, the biased lock is upgraded to a lightweight lock (a spin lock). Lock contention: if multiple threads take turns acquiring a lock, but every acquisition goes smoothly and no thread ever blocks, there is no lock contention. Contention only occurs when a thread tries to acquire the lock, finds it already held, and has to wait for it to be released.
If contention continues in the lightweight-lock state, the threads that fail to grab the lock spin, i.e. they keep checking in a loop whether the lock can be acquired. Acquiring the lock means modifying the lock flag bits in the object header with a CAS operation: first compare whether the current flag indicates the released state, and if so, set it to the locked state. This compare-and-set is atomic, which is guaranteed at the JVM level. The thread then holds the lock and records itself as the current owner.
Suppose the thread that holds the lock runs for a long time, for example performing a complex computation or a large network transfer; then the other threads waiting for the lock spin for a long time, which wastes a lot of resources. In effect only one thread is doing useful work while the others burn CPU doing nothing, a phenomenon called busy-waiting. So if multiple threads use an exclusive lock but there is no contention, or only very mild contention, synchronized stays a lightweight lock and tolerates brief busy-waiting. This is a compromise: a short period of busy-waiting in exchange for avoiding the cost of switching threads between user mode and kernel mode.
Obviously, busy-waiting must have a limit (the JVM keeps a counter of spin attempts; by default it allows 10 iterations, and this can be changed with a JVM parameter). If lock contention is severe, a thread that reaches the maximum spin count upgrades the lightweight lock to a heavyweight lock (again by modifying the lock flag bits with CAS, but without changing the ID of the thread that holds the lock). When later threads try to acquire the lock and find that the occupied lock is a heavyweight lock, they suspend themselves directly (instead of busy-waiting as described above, i.e. they do not spin) and wait to be woken up by the thread that releases the lock. Before JDK 1.6, synchronized went straight to a heavyweight lock; clearly, after this series of optimizations its performance has improved significantly.
In the JVM, a synchronized lock can only be upgraded step by step in the order biased lock -> lightweight lock -> heavyweight lock (a process also called lock inflation); it is never downgraded.
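If you want to observe these lock states yourself, one common approach (my suggestion, assuming the org.openjdk.jol:jol-core dependency is on the classpath) is to print the object header with JOL before and inside a synchronized block; the mark word changes as the lock is acquired:

import org.openjdk.jol.info.ClassLayout;

public class LockStateDemo {

    private static final Object lock = new Object();

    public static void main(String[] args) {
        // object header before any locking
        // (with -XX:BiasedLockingStartupDelay=0 it is typically already biasable)
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // object header while this thread holds the lock
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}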
Reentrant Locks (Recursive Locks)
A reentrant lock is, literally, a lock that "can be entered again", i.e. the same thread is allowed to acquire the same lock multiple times. For example, if a recursive function contains a locking operation, will that lock block the function when it calls itself?
If not, the lock is reentrant (and for this reason a reentrant lock is also called a recursive lock).
In Java, every lock whose name starts with Reentrant is reentrant, and in fact all of the ready-made Lock implementation classes provided by the JDK, as well as the synchronized keyword, are reentrant.
If you really need a non-reentrant lock, you have to implement it yourself or search online; there are plenty of examples, and writing one is simple.
A lock that is not reentrant would deadlock inside a recursive function, which is why the locks in Java are basically all reentrant. A non-reentrant lock is of limited value, and for the moment I cannot think of a scenario that would need one.
Note: if you can think of a scenario that requires a non-reentrant lock, feel free to leave a comment and discuss it.
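To make reentrancy concrete, here is a small sketch (my example, not from the article) showing that synchronized does not block a recursive call made by the thread that already holds the lock:

public class ReentrantDemo {

    private final Object lock = new Object();

    // countDown() calls itself while holding "lock"; because synchronized is
    // reentrant, the recursive call re-acquires the lock instead of deadlocking
    public void countDown(int n) {
        synchronized (lock) {
            if (n <= 0) {
                return;
            }
            System.out.println("n = " + n);
            countDown(n - 1); // re-enters the same lock
        }
    }

    public static void main(String[] args) {
        new ReentrantDemo().countDown(3); // prints 3, 2, 1
    }
}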
(Figure: the implementation classes related to Lock.)
Fair Locks and Unfair Locks
If multiple threads request a fair lock, then when the thread holding the lock releases it, the thread that asked first gets it first, which is very fair. With an unfair lock, a thread that asked later may acquire the lock first; whether the choice is random or made some other way depends on the implementation.
For ReentrantLock, the constructor lets you specify whether the lock is fair; the default is unfair. In most cases an unfair lock has higher throughput than a fair one, so unless you have a specific requirement, prefer the unfair lock.
synchronized, on the other hand, can only be an unfair lock; there is no way to make it fair. This is one of the advantages of ReentrantLock over synchronized: it is more flexible.
Here is the ReentrantLock constructor:
/**
 * Creates an instance of {@code ReentrantLock} with the
 * given fairness policy.
 *
 * @param fair {@code true} if this lock should use a fair ordering policy
 */
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
Internally, ReentrantLock implements the two inner classes FairSync and NonfairSync to provide the fair and unfair versions.
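A minimal usage sketch (mine, not from the article) of choosing the fairness policy:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {

    private static final ReentrantLock unfairLock = new ReentrantLock();     // default: unfair
    private static final ReentrantLock fairLock   = new ReentrantLock(true); // fair: FIFO ordering

    public static void main(String[] args) {
        fairLock.lock();
        try {
            System.out.println("isFair = " + fairLock.isFair()); // true
        } finally {
            fairLock.unlock(); // always release in finally
        }
    }
}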
Interruptible Locks
The literal meaning is "a lock that can respond to interruption".
First, we need to understand what an interrupt is. Java does not provide any method that directly interrupts a thread; it only provides an interrupt mechanism. So what is this mechanism?
Thread A sends thread B a request to "please stop running" by calling B's interrupt() method (of course thread B can also send the interrupt request to itself, i.e. Thread.currentThread().interrupt()). Thread B does not stop immediately; it chooses a suitable moment to respond to the interrupt in its own way, or it may simply ignore it. In other words, a Java interrupt cannot terminate a thread directly; it only sets the thread's interrupted status, and the thread being interrupted decides for itself how to handle it. It is like a teacher telling students during evening self-study to review their lessons: whether and how a student reviews is entirely up to the student.
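The cooperative nature of interruption can be sketched like this (my example; the worker thread decides for itself when to respond):

public class InterruptDemo {

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // the worker checks its own interrupted status and exits when it sees it
            while (!Thread.currentThread().isInterrupted()) {
                // ... do some work ...
            }
            System.out.println("interrupt flag seen, exiting cooperatively");
        });

        worker.start();
        Thread.sleep(100);
        worker.interrupt(); // only sets the interrupted status; the worker chooses how to react
    }
}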
Back to locks: suppose thread A holds a lock and thread B is waiting to acquire it. Because A has held the lock for too long, B no longer wants to wait; we can then interrupt B, either from B itself or from another thread. A lock that supports this is an interruptible lock.
In Java, the synchronized lock is not interruptible, whereas the Lock implementations are interruptible. This again shows that the JDK's own Lock classes are more flexible, and it is one of the reasons the Lock implementation classes exist even though we already have synchronized.
The relevant definition of the Lock interface:
public interface Lock {
    void lock();
    void lockInterruptibly() throws InterruptedException;
    boolean tryLock();
    boolean tryLock(long time, TimeUnit unit) throws InterruptedException;
    void unlock();
    Condition newCondition();
}
Among these, lockInterruptibly() is the method that acquires the lock interruptibly.
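Here is a small sketch (my example) of a waiting thread abandoning its wait when interrupted:

import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {

    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        lock.lock(); // the main thread holds the lock "for a long time"

        Thread b = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // waits for the lock, but can be interrupted
                try {
                    System.out.println("B got the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("B gave up waiting for the lock");
            }
        });

        b.start();
        Thread.sleep(100);
        b.interrupt(); // B stops waiting instead of blocking forever
        lock.unlock();
    }
}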
Shared Locks
The literal meaning is that multiple threads can share one lock. Shared locks are generally used when reading data: for instance, if we want to allow 10 threads to read a piece of shared data at the same time, we can use a shared lock with 10 permits.
In Java there are concrete shared-lock implementations, such as Semaphore.
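For example, a quick sketch (my own) of a Semaphore with 10 permits guarding concurrent readers:

import java.util.concurrent.Semaphore;

public class SharedLockDemo {

    // at most 10 threads may hold a permit (i.e. read) at the same time
    private static final Semaphore readPermits = new Semaphore(10);

    public static void read() throws InterruptedException {
        readPermits.acquire();     // blocks only if all 10 permits are taken
        try {
            // ... read the shared data ...
        } finally {
            readPermits.release(); // return the permit
        }
    }
}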
Mutex (Exclusive) Locks
The literal meaning is a lock under which threads exclude one another, i.e. the lock can be held by only one thread at a time.
In Java, both ReentrantLock and synchronized are mutex locks.
Read-Write Locks
A read-write lock is actually a pair of locks: a read lock (a shared lock) and a write lock (a mutex, or exclusive, lock).
In Java, the ReadWriteLock interface specifies only two methods: one returns the read lock and the other returns the write lock.
public interface ReadWriteLock {
    /**
     * Returns the lock used for reading.
     *
     * @return the lock used for reading
     */
    Lock readLock();

    /**
     * Returns the lock used for writing.
     *
     * @return the lock used for writing
     */
    Lock writeLock();
}
The optimistic-locking strategy was covered earlier (see "The basis of optimistic locking --- CAS" above): all threads may read at any time, and only before a write does the thread check whether the value has been changed.
A read-write lock addresses the same problem but with a slightly different strategy. In many cases a thread already knows, when it reads the data, whether it intends to modify it afterwards, so why not state that explicitly when taking the lock? If I read a value in order to update it (this is exactly what SQL's for update means), I take the write lock directly; while I hold the write lock, every other thread, whether it wants to read or write, must wait. If I read the data only to display it on the front end, I explicitly take a read lock; other threads that also want a read lock do not have to wait and can acquire it immediately (the read-lock counter is simply incremented).
Although a read-write lock feels somewhat similar to optimistic locking, it is a pessimistic-locking strategy: it does not check before the update whether the value has been modified; instead it decides before locking whether to take the read lock or the write lock. Optimistic locking refers specifically to lock-free programming.
The JDK provides exactly one implementation of the ReadWriteLock interface: ReentrantReadWriteLock. As the name indicates, it provides both a read lock and a write lock, and it is also reentrant.
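A small usage sketch (not from the article) of a cache guarded by ReentrantReadWriteLock:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {

    private final Map<String, String> cache = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    // many threads may read concurrently: the read lock is shared
    public String get(String key) {
        rwLock.readLock().lock();
        try {
            return cache.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // writers are exclusive: the write lock blocks all readers and other writers
    public void put(String key, String value) {
        rwLock.writeLock().lock();
        try {
            cache.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}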
Summary
The locks used in Java are basically all pessimistic locks, so does Java have optimistic locks? It certainly does: the atomic classes under java.util.concurrent.atomic are all implemented with optimistic locking, as shown below:
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

    return var5;
}
As the source above shows, the CAS is simply repeated in a loop until it succeeds.
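In everyday code that retry loop is hidden behind the atomic classes; a tiny sketch (my example) of using one:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {

    private static final AtomicInteger hits = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                hits.incrementAndGet(); // internally a CAS retry loop, no lock
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(hits.get()); // always 2000, even without any locking
    }
}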
Related JVM parameters
-XX:-UseBiasedLocking      disables biased locking
JDK 1.6: -XX:+UseSpinning  enables spin locking
         -XX:PreBlockSpin=10  sets the number of spin iterations
JDK 1.7 and later: these spin parameters were removed; spinning is controlled adaptively by the JVM itself