What are the implementation methods of locking in Java?

1. Pessimistic lock

As the name suggests, a pessimistic lock takes a conservative attitude toward data modification: it assumes that other parties will also modify the data, so it locks the data before operating on it and holds the lock until the operation completes. In most cases, pessimistic locking relies on the database's locking mechanism to guarantee exclusive access. If the lock is held for too long, other users cannot access the data for a long time, which hurts the concurrency of the program and also imposes a significant performance overhead on the database; for long transactions in particular, this overhead is often unbearable.

In a stand-alone system, we can use Java's built-in synchronized keyword on a method or in a synchronized block to lock a resource. In a distributed system, we can rely on the database's own locking mechanism instead:

select * from table_name where id = #{id} for update

When using pessimistic locks, pay attention to the lock level. When MySQL InnoDB locks, it uses a row lock only if the primary key or an indexed column is explicitly specified in the condition; otherwise it locks the entire table, and performance suffers. Also, because MySQL runs in autocommit mode by default, we must turn off the connection's autocommit attribute when using pessimistic locking. Pessimistic locking is suitable for write-heavy scenarios where the requirements on concurrency are not high.
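For the stand-alone case described above, a minimal sketch using synchronized might look like the following (the InventoryService class and its fields are made up for illustration):

public class InventoryService {
    private final Object lock = new Object();
    private int stock = 100;

    // Pessimistic approach: take the lock before touching the shared data,
    // so no other thread can read or modify it until we are done.
    public boolean deduct(int quantity) {
        synchronized (lock) {
            if (stock < quantity) {
                return false;
            }
            stock -= quantity;
            return true;
        }
    }
}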

2. Optimistic lock

Optimistic lock: as the name implies, it is very optimistic when operating on data and assumes that others will not modify the data at the same time, so it does not take a lock up front. Only when the update is submitted does it check whether the data conflicts. If a conflict is found, an error is returned and the caller decides what to do (a fail-fast mechanism); otherwise the operation is carried out.

The process is divided into three stages: data reading, write verification, and data writing.

If it is a stand-alone system, we can implement it based on Java's CAS. CAS is an atomic compare-and-swap operation implemented with hardware support.
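On a single JVM, a minimal sketch of CAS-based optimistic updating with java.util.concurrent.atomic might look like this (the Counter class and its retry loop are illustrative; AtomicInteger.incrementAndGet would normally be used directly):

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic approach: read the current value, compute the new one,
    // and only write it back if nobody changed it in between (CAS).
    public int increment() {
        int current;
        int next;
        do {
            current = value.get();
            next = current + 1;
        } while (!value.compareAndSet(current, next)); // retry on conflict
        return next;
    }
}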

If it is a distributed system, we can add a version number column to the database table, such as version:

update table_name
set ... , version = version + 1
where id = #{id} and version = #{version}

Before the operation, read the record's version number. When updating, the SQL statement compares the version numbers; if they match, the data is updated, otherwise the version is read again and the operation above is retried.
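As a rough sketch of that read-check-retry flow (assuming plain JDBC and a hypothetical account table with id, balance, and version columns):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OptimisticUpdater {

    // Returns true once the update succeeds; retries if the version changed.
    public boolean addBalance(Connection conn, long id, long delta, int maxRetries) throws SQLException {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            long balance;
            int version;
            // 1. Read the current record together with its version number.
            try (PreparedStatement read = conn.prepareStatement(
                    "SELECT balance, version FROM account WHERE id = ?")) {
                read.setLong(1, id);
                try (ResultSet rs = read.executeQuery()) {
                    if (!rs.next()) {
                        return false;
                    }
                    balance = rs.getLong("balance");
                    version = rs.getInt("version");
                }
            }
            // 2. Write back only if the version is still the one we read.
            try (PreparedStatement update = conn.prepareStatement(
                    "UPDATE account SET balance = ?, version = version + 1 "
                  + "WHERE id = ? AND version = ?")) {
                update.setLong(1, balance + delta);
                update.setLong(2, id);
                update.setInt(3, version);
                if (update.executeUpdate() == 1) {
                    return true; // no conflict
                }
            }
            // 3. Someone else updated the row first; loop and retry.
        }
        return false;
    }
}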

3. Distributed locks

synchronized, ReentrantLock, and the like in Java solve the problem of mutual exclusion within a single application deployed on a single machine. As the business grows and a single application evolves into a distributed cluster, with multiple threads and processes spread across different machines, the original single-machine concurrency-control strategy becomes ineffective.

At this point we need to introduce distributed locks to provide a cross-machine mutual-exclusion mechanism that controls access to shared resources.

A distributed lock needs to meet the following conditions:

  • Mutual exclusion over the same resource, just as in a stand-alone system; this is the foundation of a lock

  • High-performance lock acquisition and release

  • High availability

  • Reentrancy

  • A lock-expiration mechanism to prevent deadlock

  • Non-blocking: whether or not the lock is obtained, the call must return quickly

There are many diverse ways to implement one, based on the database, Redis, Zookeeper, and so on. Below is the mainstream Redis-based approach.

Locking

SET key unique_value  [EX seconds] [PX milliseconds] [NX|XX]

This is a single atomic command; if it succeeds (returns OK), the lock has been acquired. Note: unique_value is a unique identifier generated by the client to distinguish lock operations from different clients. Pay special attention when unlocking: first check that unique_value belongs to the client that took the lock, and only then allow the delete. After all, we must not delete a lock added by another client.

Unlocking: unlocking involves two commands (get and del), so a Lua script is required to keep the whole operation atomic.

-- first compare unique_value, to avoid releasing another client's lock
if redis.call("get",KEYS[1]) == ARGV[1] then
    return redis.call("del",KEYS[1])
else
    return 0
end
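Putting locking and unlocking together in Java, a hedged sketch using the Jedis client (assuming Jedis 3.x or later; the RedisLock class, key names, and timeout are illustrative) might look like this:

import java.util.Collections;
import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLock {

    private static final String UNLOCK_SCRIPT =
        "if redis.call('get', KEYS[1]) == ARGV[1] then "
      + "  return redis.call('del', KEYS[1]) "
      + "else return 0 end";

    // Try to lock: SET key value NX PX ttl is a single atomic command.
    // Returns the unique value on success, or null if the lock is taken.
    public String tryLock(Jedis jedis, String key, long ttlMillis) {
        String uniqueValue = UUID.randomUUID().toString();
        String result = jedis.set(key, uniqueValue, SetParams.setParams().nx().px(ttlMillis));
        return "OK".equals(result) ? uniqueValue : null;
    }

    // Unlock: compare-and-delete must run atomically, hence the Lua script.
    public boolean unlock(Jedis jedis, String key, String uniqueValue) {
        Object result = jedis.eval(UNLOCK_SCRIPT,
                Collections.singletonList(key),
                Collections.singletonList(uniqueValue));
        return Long.valueOf(1L).equals(result);
    }
}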

Given Redis's high performance, implementing distributed locks with Redis is currently the mainstream approach. But everything has pros and cons: if the Redis node holding the lock goes down before the data has been replicated to a slave node, another client can also acquire the lock.

To solve this problem, Redis officially designed the Redlock distributed lock algorithm.

Basic idea: the client requests the lock from multiple independent Redis nodes in parallel. If the lock operation succeeds on more than half of the nodes, the client is considered to have acquired the distributed lock; otherwise the lock attempt fails.

4. Reentrant lock

A reentrant lock, also called a recursive lock, means that when a thread that has already acquired the lock in an outer method enters an inner method that needs the same lock, it acquires the lock automatically.

Inside the object lock or class lock there is a counter. Each time the thread acquires the lock the counter is incremented by 1, and each time it releases the lock the counter is decremented by 1.

The number of acquisitions must match the number of releases; lock and unlock always appear in pairs.

ReentrantLock and synchronized in Java are both reentrant locks. One benefit of reentrant locks is that deadlocks can be avoided to a certain extent.
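A small sketch of reentrancy with synchronized (the class and method names are illustrative): the same thread re-enters inner() without blocking because it already owns the lock.

public class ReentrantExample {

    // outer() and inner() are both guarded by the same object lock (this).
    public synchronized void outer() {
        System.out.println("outer acquired the lock");
        inner(); // same thread re-acquires the lock: counter goes from 1 to 2
    }

    public synchronized void inner() {
        System.out.println("inner re-entered the lock");
    } // counter back to 1 here, then 0 when outer() exits

    public static void main(String[] args) {
        new ReentrantExample().outer(); // would deadlock if synchronized were not reentrant
    }
}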

5. Spin lock

A spin lock keeps the current thread executing in a loop, and the thread enters the critical section only when the loop condition is changed by another thread. The spinning thread just keeps running the loop body without changing its thread state, so the response is faster. But as the number of threads grows, performance drops noticeably, because every spinning thread keeps consuming CPU time slices. Spin locks are suitable when contention is light and the lock is held only for a short time.
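As an illustration only (not production code; the SimpleSpinLock class is made up), a minimal spin lock can be built on AtomicBoolean: the losing thread simply keeps retrying CAS instead of being parked.

import java.util.concurrent.atomic.AtomicBoolean;

public class SimpleSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait: keep trying CAS until we flip false -> true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint, available since Java 9; remove on older JDKs
        }
    }

    public void unlock() {
        locked.set(false);
    }
}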

Disadvantages of spin locks:

  • They may cause deadlock.

  • They may occupy the CPU for too long.

We can set a maximum spin time or spin count and, once the threshold is exceeded, put the thread into the blocked state to keep it from occupying CPU resources for too long. The CAS in the JUC concurrency package uses this spinning approach; compareAndSet is the core of the CAS operation and is implemented underneath with the Unsafe object.

public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while(!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}

If the value of field var2 of object var1 in memory equals the expected value var5, the field is updated to the new value (var5 + var4); otherwise nothing is done and the operation is retried until it succeeds.

CAS consists of two steps, Compare and Swap. How is atomicity guaranteed? CAS is an atomic operation supported by the CPU; its atomicity is enforced at the hardware level.

Note in particular that CAS can lead to the ABA problem, which can be solved by introducing an increasing version number.
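The JDK's AtomicStampedReference pairs the value with a stamp that plays the role of such a version number; a brief sketch:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int stamp = ref.getStamp();          // read value and version together
        Integer value = ref.getReference();

        // The CAS succeeds only if both the value AND the stamp are unchanged,
        // so an A -> B -> A sequence (which bumps the stamp twice) is detected.
        boolean updated = ref.compareAndSet(value, value + 1, stamp, stamp + 1);
        System.out.println("updated = " + updated + ", new stamp = " + ref.getStamp());
    }
}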

6. Exclusive lock

An exclusive lock, which some people call a mutex lock, allows only one thread to hold the lock at a time, whether the operation is a read or a write; all other threads are blocked.

Disadvantage: read operations do not modify data, and most systems read far more than they write. If reads are mutually exclusive with other reads, system performance drops sharply. The shared lock below solves this problem.

ReentrantLock and synchronized in Java are both exclusive locks.

7. Shared lock

A shared lock allows multiple threads to hold the lock at the same time and is generally used for read locks. A shared read lock makes concurrent reads very efficient, while read-write, write-read, and write-write combinations remain mutually exclusive. Both exclusive and shared locks are implemented on top of AQS; by overriding different methods, either exclusive or shared behavior is obtained.

In ReentrantReadWriteLock, the read lock is a shared lock and the write lock is an exclusive lock.

8. Read lock / write lock

If a resource is only being read, multiple threads do not affect each other, and sharing can be achieved by adding a read lock. If there is a modification, then to keep the data safe under concurrency only one thread may hold the lock; we call this a write lock. Read-read is shared, while read-write, write-read, and write-write are mutually exclusive.

ReentrantReadWriteLock in Java is such a read-write lock.
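A small sketch of a read-mostly cache guarded by ReentrantReadWriteLock (the SimpleCache class is made up for illustration):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SimpleCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    // Many readers may hold the read lock at the same time.
    public V get(K key) {
        rwLock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The write lock is exclusive: it blocks all readers and other writers.
    public void put(K key, V value) {
        rwLock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}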

9. Fair lock / unfair lock

Fair lock: multiple threads acquire the lock in the order in which they requested it. All threads wait in a queue, following the first-come, first-served principle.

Advantage: every thread eventually gets the resource; none starves in the queue.

Disadvantage: throughput drops considerably. Except for the first thread in the queue, all other threads block, and waking the next blocked thread costs the CPU a system-level overhead.


Unfair lock: threads do not acquire the lock in request order. Instead, each thread first tries to grab the lock directly (jumping the queue); if that fails, it falls back into the queue and waits, and if it succeeds, it gets the lock immediately.

Advantage: reduces the overhead of waking up threads, so overall throughput is higher.

Disadvantage: threads waiting in the queue may not get the lock for a long time, or ever, and starve.

In Java's concurrent locking, most lock operations are ultimately implemented through Sync, an inner class of ReentrantLock that extends AbstractQueuedSynchronizer (AQS).

ReentrantLock is an unfair lock by default; we can pass true to the constructor to create a fair lock.

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
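For example, a minimal usage sketch of a fair lock (the FairLockDemo class and doWork method are only illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // true -> fair lock: waiting threads acquire the lock in FIFO order.
    private final ReentrantLock fairLock = new ReentrantLock(true);

    public void doWork() {
        fairLock.lock();
        try {
            // critical section
        } finally {
            fairLock.unlock();
        }
    }
}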

10. Interruptible lock / non-interruptible lock

Interruptible lock: a thread blocked while waiting for the lock can have its blocked state interrupted. Non-interruptible lock: the opposite; once another thread holds the lock, the current thread can only block and wait. If the lock holder never releases the lock, every other thread that wants it stays blocked.

The built-in lock synchronized is non-interruptible, while ReentrantLock is interruptible.

ReentrantLock offers four ways to acquire the lock:

  • lock(): returns immediately if the lock is acquired; if another thread holds the lock, the current thread blocks until it acquires the lock.

  • tryLock(): returns true immediately if the lock is acquired; returns false immediately if another thread is holding it.

  • tryLock(long timeout, TimeUnit unit): returns true immediately if the lock is acquired. If another thread holds the lock, it waits for the time given by the parameters; if the lock is acquired during the wait it returns true, and if the wait times out it returns false.

  • lockInterruptibly(): returns immediately if the lock is acquired; if not, the thread blocks until the lock is acquired or the thread is interrupted by another thread.
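A brief sketch of the interruptible and timed acquisition variants listed above (the class, method names, and timeout are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    // Waits for the lock but can be interrupted while waiting.
    public void doInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    }

    // Gives up if the lock cannot be acquired within 500 ms.
    public boolean doWithTimeout() throws InterruptedException {
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                // critical section
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }
}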

11. Segmented lock

A segmented lock is really a lock design rather than a specific kind of lock: the goal is to make the lock granularity finer. For ConcurrentHashMap (as implemented in JDK 7), efficient concurrent operation is achieved through segmented locks.

The segment lock in ConcurrentHashMap is called Segment. Its structure is similar to a HashMap (the JDK 7 implementation of HashMap): internally it has an Entry array, and each element of the array is a linked list. At the same time, Segment extends ReentrantLock, so each segment is itself a lock.

When an element needs to be put, the whole map is not locked. Instead, the hashcode first determines which segment the element belongs to, and only that segment is locked. So when multiple threads put elements concurrently, inserts proceed in parallel as long as they do not land in the same segment, as the simplified sketch below illustrates.
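This is not ConcurrentHashMap's actual implementation, but the idea can be shown with a much-simplified lock-striping sketch (the StripedMap class and stripe count are made up for illustration):

import java.util.HashMap;
import java.util.Map;

public class StripedMap<K, V> {
    private static final int STRIPES = 16;
    private final Object[] locks = new Object[STRIPES];
    private final Map<K, V>[] segments;

    @SuppressWarnings("unchecked")
    public StripedMap() {
        segments = new Map[STRIPES];
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
            segments[i] = new HashMap<>();
        }
    }

    private int stripeFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    // Only the segment that the key hashes to is locked,
    // so puts to different segments proceed in parallel.
    public V put(K key, V value) {
        int i = stripeFor(key);
        synchronized (locks[i]) {
            return segments[i].put(key, value);
        }
    }

    public V get(K key) {
        int i = stripeFor(key);
        synchronized (locks[i]) {
            return segments[i].get(key);
        }
    }
}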

12. Lock upgrade (no lock | biased lock | lightweight lock | heavyweight lock)

Before JDK 1.6, synchronized was a heavyweight lock with relatively low efficiency. From JDK 1.6 on, the JVM optimized synchronized to make acquiring and releasing locks cheaper, introducing biased locks and lightweight locks. Since then there have been four lock states: no lock, biased lock, lightweight lock, and heavyweight lock. These states gradually upgrade as contention increases and cannot be downgraded.


Lock-free

Lock-free means the resource is not locked at all. All threads can access and try to modify the same resource, but only one thread's modification succeeds at a time. This is what we usually call optimistic locking.

Biased lock

A biased lock is biased toward the first thread that accesses the lock. When the synchronized code block is executed for the first time, the lock flag in the object header is modified through CAS and the lock object becomes a biased lock.

When a thread acquires the lock while entering a synchronized block, the thread ID of the lock bias is stored in the Mark Word. When the thread later enters and exits the synchronized block, it no longer locks and unlocks through CAS operations; it merely checks whether the Mark Word stores a bias pointing to the current thread. Acquiring and releasing a lightweight lock relies on multiple CAS atomic instructions, whereas a biased lock needs only one CAS instruction, when the ThreadID is first installed.

After executing the synchronized block, the thread does not actively release the biased lock. When the thread executes the synchronized block a second time, it checks whether the thread holding the lock is itself (the ID of the holding thread is also in the object header); if so, it simply proceeds. Since the lock was never released, there is no need to lock again, so a biased lock adds almost no extra overhead and performs extremely well.

A biased lock is released only when another thread tries to compete for it; the thread holding it never releases it proactively. Revoking a biased lock requires waiting for a global safepoint, that is, a point in time when no bytecode is executing: the thread that owns the biased lock is paused first, and then the JVM checks whether the lock object is still locked. If the owning thread is not active, the object header is set back to the lock-free state and the bias is cancelled, returning the object to the lock-free state (flag bits 01) or the lightweight-lock state (flag bits 00).

In short, biased locking means that when a piece of synchronized code is only ever accessed by the same thread, i.e. there is no competition between threads, that thread acquires the lock automatically on subsequent accesses, reducing the cost of acquiring it.

Lightweight lock

If the current lock is a biased lock and multiple threads compete for it at the same time, the biased lock is upgraded to a lightweight lock. A lightweight lock assumes that although competition exists, it is ideally very light, and the lock is acquired by spinning.

A lightweight lock is used in two situations:

  • When the biased-locking feature is turned off.

  • When multiple threads compete for a biased lock, causing it to be upgraded. As soon as a second thread joins the competition, the biased lock is upgraded to a lightweight (spin) lock.

If competition continues in the lightweight-lock state, threads that fail to grab the lock spin, repeatedly checking whether the lock can be acquired. Acquiring the lock actually means modifying the lock flag in the object header through CAS: first compare whether the current flag is "released", and if so set it to "locked"; this step is atomic. The thread that wins then records itself as the current lock holder.

Heavyweight lock

If contention is intense and a thread spins more than a certain number of times (10 loops by default, adjustable via JVM parameters), the lightweight lock is upgraded to a heavyweight lock (the lock flag is still modified with CAS, but the ID of the holding thread is not changed). When later threads try to acquire the lock and find that the occupied lock is a heavyweight lock, they suspend themselves directly (instead of busy-waiting) and wait to be woken up later.

A heavyweight lock means that once one thread has acquired the lock, all other threads waiting for it are blocked. In short, all control is handed over to the operating system, which is responsible for scheduling the threads and changing their states. This leads to frequent switches of thread state, suspensions, and wake-ups, consuming a lot of system resources.

13. Lock optimization techniques (lock coarsening, lock elimination)

Lock coarsening reminds us that everything has its limits: in some cases we actually want to merge many lock requests into a single one, to reduce the performance loss caused by a large number of lock acquisitions, synchronizations, and releases in a short time.

For example, consider a loop that takes the lock inside its body:

for(int i=0;i<size;i++){
    synchronized(lock){
        // ... business logic omitted
    }
}

After lock coarsening the code looks like this:

synchronized(lock){
    for(int i=0;i<size;i++){
        // ... business logic omitted
    }
}

Lock elimination means that when the JVM detects that a piece of code cannot possibly be shared or contended, it removes the synchronization lock belonging to that code, thereby improving program performance.

Lock elimination is based on escape-analysis data. For example, the append() method of StringBuffer, or the add() method of Vector, can often have their locks eliminated, as in the following code:

public String method() {
    StringBuffer sb = new StringBuffer();
    for (int i = 0; i < 10; i++) {
        sb.append("i:" + i);
    }
    return sb.toString();
}

Examining the compiled output of the above code shows that the thread-safe, locking StringBuffer object we wrote ends up being replaced with the unsynchronized StringBuilder. The reason is that the StringBuffer variable is a local variable and never escapes the method, so lock elimination (no locking) can be applied to speed up the program.
