Introduction to 15 types of locks in Java
When reading concurrency articles, you will see many kinds of locks mentioned, such as fair locks and optimistic locks. This article introduces the various classifications of locks. The contents are as follows:
Fair lock / unfair lock
Reentrant lock / non-reentrant lock
Exclusive lock / shared lock
Mutex lock / read-write lock
Optimistic lock / pessimistic lock
Segment lock
Bias lock / lightweight lock / heavyweight lock
Spin lock
The terms above name many kinds of locks. These classifications do not all refer to the state of a lock: some refer to a lock's characteristics, and some refer to a lock's design. The sections below explain each of these terms.
Fair lock / unfair lock
Fair lock
Fair lock means that multiple threads acquire locks in the order in which they apply for locks.
Unfair lock
An unfair lock means that threads do not acquire the lock in the order in which they requested it; a thread that requests the lock later may acquire it before a thread that requested it earlier. This may cause priority inversion or starvation.
For Java's ReentrantLock, you specify whether the lock is fair through its constructor; the default is unfair. The advantage of an unfair lock is higher throughput than a fair lock. synchronized is also an unfair lock; since it does not implement thread scheduling through AQS the way ReentrantLock does, there is no way to make it fair.
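For instance, the fairness choice is made at construction time and can be checked afterwards with isFair():

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // The no-arg constructor creates an unfair lock (the default)
        ReentrantLock unfair = new ReentrantLock();
        // Passing true requests a fair lock
        ReentrantLock fair = new ReentrantLock(true);

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```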
Reentrant lock / non-reentrant lock
Reentrant lock
A reentrant lock, broadly speaking, is a lock that can be acquired repeatedly and recursively: after the lock is acquired in an outer method, the same thread can acquire it again in an inner method without deadlocking (provided it is the same object or class). Such a lock is called a reentrant lock. ReentrantLock and synchronized are both reentrant locks.
synchronized void setA() throws Exception{
Thread.sleep(1000);
setB();
}
synchronized void setB() throws Exception{
Thread.sleep(1000);
}
The code above relies on reentrancy. If the lock were not reentrant, setB could not be entered by the current thread, which already holds the lock in setA, and a deadlock would result.
Non-reentrant lock
A non-reentrant lock, in contrast to a reentrant lock, cannot be acquired recursively; a recursive acquisition deadlocks. A classic illustration builds a non-reentrant lock out of a spin lock. The code is as follows:
import java.util.concurrent.atomic.AtomicReference;
public class UnreentrantLock {
    private AtomicReference<Thread> owner = new AtomicReference<>();
    public void lock() {
        Thread current = Thread.currentThread();
        // A classic "spin" idiom; the same pattern appears in AtomicInteger
        for (;;) {
            if (owner.compareAndSet(null, current)) {
                return;
            }
        }
    }
    public void unlock() {
        Thread current = Thread.currentThread();
        owner.compareAndSet(current, null);
    }
}
The code is fairly simple: an atomic reference stores the owning thread. If the same thread calls lock() twice without calling unlock() in between, the second call spins forever, producing a deadlock, so this lock is not reentrant. In practice, though, the same thread should not have to release and reacquire the lock on every nested call; that kind of scheduling switch wastes resources.
Turn it into a reentrant lock:
import java.util.concurrent.atomic.AtomicReference;
public class ReentrantLockDemo {
    private AtomicReference<Thread> owner = new AtomicReference<>();
    private int state = 0;
    public void lock() {
        Thread current = Thread.currentThread();
        if (current == owner.get()) {
            state++; // already the holder: just bump the count
            return;
        }
        // A classic "spin" idiom; the same pattern appears in AtomicInteger
        for (;;) {
            if (owner.compareAndSet(null, current)) {
                return;
            }
        }
    }
    public void unlock() {
        Thread current = Thread.currentThread();
        if (current == owner.get()) {
            if (state != 0) {
                state--; // inner unlock: just decrement
            } else {
                owner.compareAndSet(current, null); // outermost unlock releases the lock
            }
        }
    }
}
Before each operation, check whether the current thread is already the lock holder, and use the state counter instead of releasing and reacquiring the lock every time.
Reentrant lock implementation in ReentrantLock
Here’s a look at the lock acquisition method for unfair locks:
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // right here: the reentrant case
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
AQS maintains a private volatile int state to count reentries. Avoiding frequent acquire-and-release cycles not only improves efficiency but also avoids deadlock.
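This reentry count is observable through ReentrantLock's getHoldCount() method; a small demonstration:

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        lock.lock();                             // re-enter: the hold count rises to 2
        System.out.println(lock.getHoldCount()); // 2
        lock.unlock();
        lock.unlock();                           // one unlock() per lock()
        System.out.println(lock.getHoldCount()); // 0
    }
}
```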
Exclusive lock / shared lock
Exclusive locks and shared locks: if you read ReentrantLock and ReentrantReadWriteLock in the J.U.C package (java.util.concurrent), you will find that one is an exclusive lock and the other is a shared lock.
Exclusive lock: This lock can only be held by one thread at a time.
Shared lock: this lock can be held by multiple threads at once. A typical example is the read lock of ReentrantReadWriteLock: its read lock can be shared, while its write lock is exclusive.
In addition, the sharing of read locks makes concurrent reading very efficient, but read-write and write-read access are mutually exclusive.
Exclusive and shared locks are also implemented through AQS: different methods are implemented to achieve exclusive or shared behavior. synchronized, of course, is an exclusive lock.
Mutex lock / read-write lock
Mutex lock
Lock the shared resource before accessing it, and unlock it after the access completes. After locking, any other thread that tries to lock blocks until the current holder unlocks.
If more than one thread is blocked when the lock is released, all threads waiting on the lock are moved to the ready state. The first thread to run performs the locking operation, and the others wait again. In this way, only one thread at a time can access the resource protected by the mutex.
Read-write lock
The read-write lock is both a mutual exclusion lock and a shared lock. The read mode is shared, and the write mode is mutually exclusive (exclusive lock).
There are three states of read-write lock: read-locked state, write-locked state and non-locked state
In Java, the read-write lock is represented by the ReadWriteLock interface, whose main implementation is ReentrantReadWriteLock.
Only one thread at a time can hold a read-write lock in write mode, but multiple threads can hold it in read mode simultaneously, which is why it supports high concurrency. While the lock is held in write mode, any thread trying to acquire it blocks until the write lock is released. While the lock is held in read mode, other threads may also acquire it in read mode, but no thread may acquire it in write mode until all read locks have been released. To keep writers from waiting indefinitely, when a read-write lock detects that a thread wants the write lock, it blocks all subsequent threads that request the read lock. Read-write locks are therefore well suited to resources that are read far more often than they are written.
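As a sketch of these rules, a minimal map wrapper guarded by a ReentrantReadWriteLock (the class and field names here are illustrative) might look like:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();      // many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();     // exclusive: blocks both readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

The lock/unlock pairs sit in try/finally blocks so the lock is released even if the guarded code throws.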
Optimistic lock / pessimistic lock
Pessimistic lock
Always assume the worst case: every time you read the data, you assume someone else will modify it, so you lock before every access. Anyone else who wants the data then blocks until they obtain the lock (the shared resource is used by only one thread at a time; other threads block and get the resource only after it is released). Traditional relational databases use many such mechanisms, such as row locks, table locks, read locks, and write locks, all acquired before the operation. Exclusive locks such as synchronized and ReentrantLock in Java are implementations of the pessimistic-locking idea.
Optimistic lock
Always assume the best case: every time you read the data, you assume no one else will modify it, so you do not lock. When updating, however, you check whether anyone else updated the data in the meantime, typically using a version-number mechanism or the CAS algorithm. Optimistic locking suits read-heavy applications and can improve throughput. The write_condition mechanism provided by databases is in fact an optimistic lock. In Java, the atomic variable classes in the java.util.concurrent.atomic package are implemented with CAS, one realization of optimistic locking.
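As a minimal sketch of the version-number mechanism (the VersionedValue class and its methods are hypothetical, not a real API), an update succeeds only when the version recorded at read time is still current:

```java
// Toy model of an optimistic, versioned record: readers never block each other,
// and a writer's update is rejected if the version changed since it read.
public class VersionedValue {
    private int value;
    private int version;

    public synchronized int[] read() {           // returns {value, version}
        return new int[]{value, version};
    }

    // Update succeeds only if nobody bumped the version since we read it
    public synchronized boolean tryUpdate(int newValue, int expectedVersion) {
        if (version != expectedVersion) {
            return false;                        // someone else wrote first; caller retries
        }
        value = newValue;
        version++;
        return true;
    }
}
```

A real database performs the version check and the write atomically in one statement (e.g. `UPDATE ... WHERE version = ?`); synchronized stands in for that atomicity here.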
Segment lock
Segmented locking is actually a lock design, not a specific lock. ConcurrentHashMap achieves efficient concurrent operations through this segmented-lock design.
The locking mechanism of concurrent container classes is based on segmented locks with smaller granularity. Segmented locks are also one of the important means to improve the performance of multi-concurrent programs.
In concurrent programs, serial operations will reduce scalability, and context switching will also reduce performance. These two problems will be caused when competition occurs on the lock. When using an exclusive lock to protect restricted resources, it is basically a serial method - only one thread can access it at a time. So the biggest threat to scalability is exclusive locks.
There are generally three ways to reduce lock contention: 1. reduce the time a lock is held; 2. reduce the frequency of lock requests; 3. replace exclusive locks with coordination mechanisms that permit greater concurrency.
In some cases we can extend the lock-decomposition technique further and decompose the lock over a set of independent objects; this is segmented locking (also called lock striping).
To put it simply:
The container holds multiple locks, each guarding part of the container's data. When threads access data in different segments, there is no lock contention between them, which effectively improves concurrent access. This is the lock-segmentation technique used by ConcurrentHashMap: the data is stored in segments, each segment gets its own lock, and while one thread holds the lock to access one segment, the other segments remain accessible to other threads.
For example, ConcurrentHashMap (prior to Java 8) used an array of 16 locks. Each lock protects 1/16 of the hash buckets, and the Nth bucket is protected by lock (N mod 16). Assuming a reasonable hash function distributes keys evenly, this reduces requests on any one lock to roughly 1/16, which is what allowed ConcurrentHashMap to support up to 16 concurrent writers.
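The same striping idea can be sketched in a few lines. StripedCounter below is an illustrative toy, not ConcurrentHashMap's actual implementation: bucket n is guarded by lock (n mod 16), so threads touching buckets in different stripes never contend.

```java
// Toy lock striping: 16 locks guard 64 buckets, bucket n -> lock (n mod 16)
public class StripedCounter {
    private static final int STRIPES = 16;
    private final Object[] locks = new Object[STRIPES];
    private final int[] buckets = new int[64];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    public void add(int bucket, int delta) {
        synchronized (locks[bucket % STRIPES]) { // contends only within one stripe
            buckets[bucket] += delta;
        }
    }

    public int get(int bucket) {
        synchronized (locks[bucket % STRIPES]) {
            return buckets[bucket];
        }
    }
}
```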
Bias lock / lightweight lock / heavyweight lock
Lock status:
Lock-free status
Bias lock status
Lightweight lock status
Heavyweight lock status
The lock's status is recorded in fields of the object header. The four states escalate gradually under contention, and escalation is irreversible: a lock cannot be downgraded. These four states are not locks in the Java language; they are optimizations the JVM makes to improve the efficiency of acquiring and releasing locks when synchronized is used.
Bias lock
A biased lock means that if a piece of synchronized code is always accessed by the same thread, that thread acquires the lock automatically, reducing the cost of acquisition.
Lightweight lock
A lightweight lock means that when a biased lock is accessed by a second thread, the biased lock is upgraded to a lightweight lock. The other thread tries to acquire the lock by spinning instead of blocking, which improves performance.
Heavyweight lock
A heavyweight lock means that when another thread is spinning on a lightweight lock, the spin does not continue forever. Once it has spun a certain number of times without acquiring the lock, the thread blocks and the lock inflates into a heavyweight lock. A heavyweight lock causes other threads that request it to block, which reduces performance.
Spin lock
We know that the CAS algorithm is one way to implement optimistic locking, and CAS also involves spinning, so let's look at what a spin lock is.
A brief review of the CAS algorithm
CAS stands for Compare and Swap, a well-known lock-free algorithm. Lock-free programming synchronizes variables between threads without using locks, that is, without blocking any thread, so it is also called non-blocking synchronization. The CAS algorithm involves three operands:
V: the memory location to read and update
A: the expected old value to compare against
B: the new value to write
When updating a variable, the value at memory location V is changed to B only if the expected value A matches the actual value at V; otherwise nothing is done. This is generally performed in a spin, that is, with constant retries.
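Java exposes exactly this operation through the atomic classes, for example AtomicInteger.compareAndSet(expected, newValue):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(10);

        // CAS succeeds only when the current value matches the expected value A
        boolean ok = v.compareAndSet(10, 11);      // expected 10, new value 11
        System.out.println(ok + " " + v.get());    // true 11

        // With a stale expectation the swap fails and the value is untouched
        boolean stale = v.compareAndSet(10, 99);
        System.out.println(stale + " " + v.get()); // false 11
    }
}
```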
What is a spin lock?
Spin lock (spinlock): when a thread tries to acquire the lock and the lock is already held by another thread, the thread waits in a loop, repeatedly checking whether the lock can be acquired, and exits the loop only once it succeeds.
It is a locking mechanism for protecting shared resources. Spin locks are similar to mutexes: both exist to ensure mutually exclusive use of a resource, and either kind of lock has at most one holder at any time; that is, at most one execution unit can hold the lock at any moment. The two differ slightly in scheduling, however. With a mutex, if the resource is already occupied, the requester can only go to sleep. A spin lock does not put the caller to sleep: if the lock is held by another execution unit, the caller keeps looping, checking whether the holder has released the lock, which is how the "spin" got its name.
How to implement spin lock in Java?
The following is a simple example:
import java.util.concurrent.atomic.AtomicReference;
public class SpinLock {
    private AtomicReference<Thread> cas = new AtomicReference<>();
    public void lock() {
        Thread current = Thread.currentThread();
        // Spin with CAS until the lock is free
        while (!cas.compareAndSet(null, current)) {
            // do nothing
        }
    }
    public void unlock() {
        Thread current = Thread.currentThread();
        cas.compareAndSet(current, null);
    }
}
The lock() method uses CAS. The first thread A acquires the lock successfully and never enters the while loop. If thread A has not released the lock when another thread B tries to acquire it, B's CAS fails, so B enters the while loop and keeps retrying until thread A calls unlock() to release the lock.
Problems with spin locks
1. If a thread holds the lock for too long, the other threads waiting for it spin in a loop and consume CPU; improper use can drive CPU usage extremely high.
2. The spin lock implemented above is not fair: it cannot guarantee that the thread that has waited longest acquires the lock first. Unfair locks can cause "thread starvation".
Advantages of spin lock
1. A spin lock does not switch the thread's state; the thread stays in user mode and remains active. It never enters the blocked state, which avoids unnecessary context switches, so execution is fast.
2. A non-spin lock enters the blocked state when it cannot acquire the lock, dropping into kernel mode; when the lock is later acquired, the thread must be restored from kernel mode, which requires a thread context switch. (After a thread blocks, it enters the kernel's scheduling state; on Linux this causes the system to switch back and forth between user mode and kernel mode, seriously hurting the lock's performance.)
Reentrant spin locks and non-reentrant spin locks
A careful look at the code at the start of this section shows that it does not support reentrancy: once a thread has acquired the lock, a second acquisition before release cannot succeed, because the CAS fails and the second call enters the while loop to wait. If the lock were reentrant, the second acquisition would succeed.
Moreover, even if a second acquisition could succeed, the first release would also release the lock acquired the second time, which is unreasonable.
To implement a reentrant lock, we introduce a counter to record the number of times the holding thread has acquired the lock.
import java.util.concurrent.atomic.AtomicReference;
public class ReentrantSpinLock {
    private AtomicReference<Thread> cas = new AtomicReference<>();
    private int count;
    public void lock() {
        Thread current = Thread.currentThread();
        if (current == cas.get()) { // The current thread already holds the lock: bump the count and return
            count++;
            return;
        }
        // Otherwise spin with CAS until the lock is obtained
        while (!cas.compareAndSet(null, current)) {
            // do nothing
        }
    }
    public void unlock() {
        Thread cur = Thread.currentThread();
        if (cur == cas.get()) {
            if (count > 0) { // count > 0 means the thread re-entered; releasing just decrements the count
                count--;
            } else { // count == 0: actually release the lock, keeping acquires and releases balanced
                cas.compareAndSet(cur, null);
            }
        }
    }
}
Spin lock and mutex lock
Both spin locks and mutex locks are mechanisms to protect resource sharing.
Whether it is a spin lock or a mutex lock, there can be at most one holder at any time.
If a mutex is already held, a thread trying to acquire it enters the sleep state; a thread trying to acquire a spin lock does not sleep, but waits in a loop for the lock to be released.
The above is the detailed content of What are the various locks in Java?. For more information, please follow other related articles on the PHP Chinese website!