Recently I ran into some high-concurrency scenarios at work that need locking to keep the business logic correct, with the requirement that performance must not suffer much once locking is added. The initial idea was to lock on attributes of the data such as its timestamp and id, so that different pieces of data can still be processed concurrently. The lock granularity offered by the Java API itself is too coarse to satisfy both requirements at once, so I wrote a few simple extensions myself...
The first implementation borrows the striping idea of ConcurrentHashMap: generate a fixed number of locks up front, then return the matching lock based on the key at use time. This is the simplest and best-performing of the several implementations, and it is the lock strategy that was finally adopted. The code is as follows:
```java
import java.util.HashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Segment (striped) lock. A fixed number of underlying locks is created up
 * front; the lock to use is chosen from the hash value of the key passed in.
 * Warning: if the hash value of the key changes between lock() and unlock(),
 * the lock may never be released!
 */
public class SegmentLock<T> {
    private Integer segments = 16; // default number of segments
    private final HashMap<Integer, ReentrantLock> lockMap = new HashMap<>();

    public SegmentLock() {
        init(null, false);
    }

    public SegmentLock(Integer counts, boolean fair) {
        init(counts, fair);
    }

    private void init(Integer counts, boolean fair) {
        if (counts != null) {
            segments = counts;
        }
        for (int i = 0; i < segments; i++) {
            lockMap.put(i, new ReentrantLock(fair));
        }
    }

    public void lock(T key) {
        // Math.floorMod avoids a negative index (and an NPE from the map)
        // when hashCode() is negative
        ReentrantLock lock = lockMap.get(Math.floorMod(key.hashCode(), segments));
        lock.lock();
    }

    public void unlock(T key) {
        ReentrantLock lock = lockMap.get(Math.floorMod(key.hashCode(), segments));
        lock.unlock();
    }
}
```
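To make the striping idea concrete, here is a minimal, self-contained sketch of the same pattern: an array of ReentrantLocks indexed by `Math.floorMod` of the key's hash, guarding per-key counters. The class and key names (`StripedCounterDemo`, `"order-…"`) are made up for this illustration; only the striping technique comes from the article.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounterDemo {
    static final int SEGMENTS = 16;
    static final ReentrantLock[] LOCKS = new ReentrantLock[SEGMENTS];
    // Pre-populated before any thread starts, and never structurally
    // modified afterwards, so concurrent reads of the map itself are safe.
    static final Map<String, int[]> counters = new HashMap<>();

    static {
        for (int i = 0; i < SEGMENTS; i++) LOCKS[i] = new ReentrantLock();
        for (int k = 0; k < 8; k++) counters.put("order-" + k, new int[1]);
    }

    // Pick a lock from the key's hash, as SegmentLock does;
    // floorMod keeps the index non-negative.
    static ReentrantLock lockFor(String key) {
        return LOCKS[Math.floorMod(key.hashCode(), SEGMENTS)];
    }

    static void increment(String key) {
        ReentrantLock lock = lockFor(key);
        lock.lock();
        try {
            // plain ++ is safe: all writers of this key share one stripe lock
            counters.get(key)[0]++;
        } finally {
            lock.unlock();
        }
    }

    public static int total() {
        int sum = 0;
        for (int[] c : counters.values()) sum += c[0];
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) increment("order-" + (i % 8));
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(total()); // 40000: no increments lost
    }
}
```

Different keys may share a stripe (harmless, just extra contention), but the same key always maps to the same lock, which is the invariant the business logic needs.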
The second lock strategy was developed on top of the segment lock above and aims at truly fine-grained locking: every object with a distinct hash value gets its own independent lock. In tests, when the locked code executed very quickly, it was about 30% slower than the segment lock; for longer-running operations its relative performance should be better. The code is as follows:
```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class HashLock<T> {
    private boolean isFair = false;
    // segment lock, used to synchronize creation/removal of per-key locks
    private final SegmentLock<T> segmentLock = new SegmentLock<>();
    private final ConcurrentHashMap<T, LockInfo> lockMap = new ConcurrentHashMap<>();

    public HashLock() {
    }

    public HashLock(boolean fair) {
        isFair = fair;
    }

    public void lock(T key) {
        LockInfo lockInfo;
        segmentLock.lock(key);
        try {
            lockInfo = lockMap.get(key);
            if (lockInfo == null) {
                lockInfo = new LockInfo(isFair);
                lockMap.put(key, lockInfo);
            } else {
                lockInfo.count.incrementAndGet();
            }
        } finally {
            segmentLock.unlock(key);
        }
        lockInfo.lock.lock();
    }

    public void unlock(T key) {
        LockInfo lockInfo = lockMap.get(key);
        if (lockInfo.count.get() == 1) {
            segmentLock.lock(key);
            try {
                // double-check under the segment lock before removing
                if (lockInfo.count.get() == 1) {
                    lockMap.remove(key);
                }
            } finally {
                segmentLock.unlock(key);
            }
        }
        lockInfo.count.decrementAndGet();
        lockInfo.unlock();
    }

    private static class LockInfo {
        public ReentrantLock lock;
        public AtomicInteger count = new AtomicInteger(1); // holders plus waiters

        private LockInfo(boolean fair) {
            this.lock = new ReentrantLock(fair);
        }

        public void lock() {
            this.lock.lock();
        }

        public void unlock() {
            this.lock.unlock();
        }
    }
}
```
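For comparison, the per-key lock map can also be built with a single `ConcurrentHashMap.computeIfAbsent` call, which is atomic and needs no auxiliary segment lock. The trade-off (and the reason HashLock does the reference counting above) is that this simple variant never removes idle locks, so it only suits a bounded key space. `KeyLockDemo` and its key names are hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class KeyLockDemo {
    // Per-key locks; entries are never removed, unlike HashLock.
    static final ConcurrentHashMap<String, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    static ReentrantLock lockFor(String key) {
        // computeIfAbsent is atomic: two threads racing on the same new key
        // are guaranteed to receive the same lock instance.
        return LOCKS.computeIfAbsent(key, k -> new ReentrantLock());
    }

    public static void main(String[] args) {
        ReentrantLock a = lockFor("user-1");
        ReentrantLock b = lockFor("user-1");
        ReentrantLock c = lockFor("user-2");
        System.out.println(a == b); // true: same key, same lock
        System.out.println(a == c); // false: distinct keys lock independently
    }
}
```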
The hash lock still feels slightly flawed because it has to bring in the segment lock to synchronize the creation and destruction of per-key locks. So a third lock was written in search of better performance and even finer granularity. The idea is to hold the locks behind Java weak references and hand their destruction over to the JVM's garbage collector, avoiding the extra bookkeeping.
Unfortunately, because ConcurrentHashMap is used as the lock container, it still cannot fully escape striped locking internally. This lock performs roughly 10% faster than HashLock. The code:
```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Weak-reference lock: provides an independent lock per distinct hash value.
 * Callers must keep a strong reference to the returned lock for the whole
 * critical section, otherwise the GC may reclaim it while it is in use.
 */
public class WeakHashLock<T> {
    private final ConcurrentHashMap<T, WeakLockRef<T, ReentrantLock>> lockMap = new ConcurrentHashMap<>();
    private final ReferenceQueue<ReentrantLock> queue = new ReferenceQueue<>();

    public ReentrantLock get(T key) {
        if (lockMap.size() > 1000) {
            clearEmptyRef();
        }
        WeakReference<ReentrantLock> lockRef = lockMap.get(key);
        ReentrantLock lock = (lockRef == null ? null : lockRef.get());
        while (lock == null) {
            lockMap.putIfAbsent(key, new WeakLockRef<>(new ReentrantLock(), queue, key));
            lockRef = lockMap.get(key);
            lock = (lockRef == null ? null : lockRef.get());
            if (lock != null) {
                return lock;
            }
            clearEmptyRef(); // the entry's referent was collected; drop it and retry
        }
        return lock;
    }

    @SuppressWarnings("unchecked")
    private void clearEmptyRef() {
        Reference<? extends ReentrantLock> ref;
        while ((ref = queue.poll()) != null) {
            WeakLockRef<T, ? extends ReentrantLock> weakLockRef = (WeakLockRef<T, ? extends ReentrantLock>) ref;
            lockMap.remove(weakLockRef.key);
        }
    }

    private static final class WeakLockRef<T, K> extends WeakReference<K> {
        final T key;

        private WeakLockRef(K referent, ReferenceQueue<? super K> q, T key) {
            super(referent, q);
            this.key = key;
        }
    }
}
```
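The correct usage pattern matters here: the caller must hold a strong reference to the returned lock across lock() and unlock(), otherwise nothing prevents the GC from reclaiming it mid-section. Below is a self-contained sketch of that pattern, using a stripped-down stand-in for WeakHashLock#get (no ReferenceQueue cleanup, just on-demand replacement); `WeakLockDemo` and its key are invented for the example:

```java
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class WeakLockDemo {
    static final ConcurrentHashMap<String, WeakReference<ReentrantLock>> locks =
            new ConcurrentHashMap<>();

    // Simplified stand-in for WeakHashLock#get: create a weak ref on demand,
    // retry if the referent has already been collected.
    static ReentrantLock get(String key) {
        while (true) {
            WeakReference<ReentrantLock> ref =
                    locks.computeIfAbsent(key, k -> new WeakReference<>(new ReentrantLock()));
            ReentrantLock lock = ref.get();
            if (lock != null) return lock;
            locks.remove(key, ref); // stale entry: referent collected, retry
        }
    }

    public static void main(String[] args) {
        // Hold a strong reference for the whole critical section.
        ReentrantLock lock = get("order-42");
        lock.lock();
        try {
            // Same key returns the same instance while it is strongly reachable.
            System.out.println(get("order-42") == lock); // true
        } finally {
            lock.unlock();
        }
    }
}
```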
At first I wanted to implement the fine-grained locks with LockSupport and AQS (AbstractQueuedSynchronizer), but after much writing I found the result was not very different from Java's native locks, so I gave up. Switching over to wrapping Java's own locks instead cost quite a bit of time.
After implementing these fine-grained locks, new ideas have come up. For example, the same striping idea could be used to hand data over to dedicated worker threads for processing, which would reduce the time a large number of threads spend blocked. That is left for future exploration...
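That future idea can be sketched already: route each key to one of a few single-threaded executors by hash, so all work for a key is serialized on one thread and per-key state needs no locks at all. Everything named here (`KeyAffinityDemo`, the worker count, the counters) is hypothetical, not from the article:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class KeyAffinityDemo {
    static final int WORKERS = 4;
    static final ExecutorService[] POOL = new ExecutorService[WORKERS];
    static final int[] counters = new int[8]; // one cell per key, no locks needed

    static {
        for (int i = 0; i < WORKERS; i++) POOL[i] = Executors.newSingleThreadExecutor();
    }

    // All tasks for the same key land on the same single-threaded executor,
    // so updates to that key's state are naturally serialized.
    static void submit(int key, Runnable task) {
        POOL[Math.floorMod(Integer.hashCode(key), WORKERS)].execute(task);
    }

    public static int total() {
        int sum = 0;
        for (int c : counters) sum += c;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10_000; i++) {
            final int key = i % counters.length;
            submit(key, () -> counters[key]++); // plain ++, serialized per key
        }
        for (ExecutorService e : POOL) {
            e.shutdown();
            e.awaitTermination(10, TimeUnit.SECONDS);
        }
        System.out.println(total()); // 10000: nothing lost, no locks taken
    }
}
```

The trade-off versus locking is ordering and backpressure: a hot key serializes on one worker, so the hash must spread hot keys reasonably well.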
The above is the detailed content of Detailed sample code of three ways to implement fine-grained locks in Java. For more information, please follow other related articles on the PHP Chinese website!