How to improve the performance of locks in Java
After introducing thread deadlock detection to Plumbr two months ago, we started receiving inquiries along the lines of: "Great! Now I know what is causing the performance problem in my program, but what should I do next?"
We strive to build solutions for the problems our product detects, but in this article I will share several commonly used techniques you can apply even without any tools: splitting locks, using concurrent data structures, protecting data rather than code, and reducing the scope of locks.
Locks are not the root of the problem; lock contention is
When you run into a performance problem in multi-threaded code, the usual reaction is to blame the locks. After all, locks are "known" to slow programs down and limit their scalability. So if you start optimizing your code with this piece of common sense, the likely result is nasty concurrency problems later on.
It is therefore important to understand the difference between contended and uncontended locks. Lock contention occurs when one thread tries to enter a synchronized block or method that is currently being executed by another thread: that thread is forced to wait until the first thread leaves the synchronized block and releases the monitor. When only one thread at a time attempts to execute the synchronized code, the lock remains uncontended.
In fact, the JVM is already optimized for the uncontended case, which covers the majority of situations in most applications: uncontended locks add virtually no overhead during execution. So it is not locks you should blame for performance problems, but lock contention. With that in mind, let's see what we can do to reduce the likelihood and duration of contention.
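To make the distinction concrete, here is a minimal sketch of my own (the ContentionDemo class and its counter are not part of the original example). With a single thread the lock inside increment() is never contended; starting a second thread makes both of them compete for the same monitor.

public class ContentionDemo {
  private int counter = 0;

  // Uncontended when called from one thread; contended when several threads
  // race to enter it, forcing all but one of them to wait on the monitor.
  public synchronized void increment() {
    counter++;
  }

  public static void main(String[] args) throws InterruptedException {
    ContentionDemo demo = new ContentionDemo();
    Runnable task = () -> {
      for (int i = 0; i < 1_000_000; i++) {
        demo.increment();
      }
    };
    Thread t1 = new Thread(task);
    Thread t2 = new Thread(task); // the second thread introduces contention
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    System.out.println(demo.counter); // always 2_000_000 thanks to the lock
  }
}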
Protect data, not code
A quick way to achieve thread safety is to lock down access to the whole method. For example, the following example attempts to build an online poker game server that way:
class GameServer {
  public Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public synchronized void join(Player player, Table table) {
    if (player.getAccountBalance() > table.getLimit()) {
      List<Player> tablePlayers = tables.get(table.getId());
      if (tablePlayers.size() < 9) {
        tablePlayers.add(player);
      }
    }
  }

  public synchronized void leave(Player player, Table table) {/* body skipped for brevity */}
  public synchronized void createTable() {/* body skipped for brevity */}
  public synchronized void destroyTable(Table table) {/* body skipped for brevity */}
}
The author's intent is good: when a new player joins, the code must ensure that the number of players seated at the table does not exceed the table's capacity of nine.
But this solution serializes every player joining any table, even when the server is under light load, so threads waiting for the lock to be released will frequently trigger contention events. Because the locked block also contains the account-balance and table-limit checks, it may significantly increase the cost of each call, which in turn increases both the likelihood and the duration of contention.
The first step of the solution is to make sure we are protecting the data, not the code, by moving the synchronization from the method declaration into the method body. In the simple example above this may not change much, but we have to think in terms of the whole GameServer interface, not just the join() method:
class GameServer {
  public Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public void join(Player player, Table table) {
    synchronized (tables) {
      if (player.getAccountBalance() > table.getLimit()) {
        List<Player> tablePlayers = tables.get(table.getId());
        if (tablePlayers.size() < 9) {
          tablePlayers.add(player);
        }
      }
    }
  }

  public void leave(Player player, Table table) {/* body skipped for brevity */}
  public void createTable() {/* body skipped for brevity */}
  public void destroyTable(Table table) {/* body skipped for brevity */}
}
This may look like a small change, but it affects the behaviour of the whole class. The previously synchronized methods locked the entire GameServer instance whenever a player joined a table, creating contention with players trying to leave tables at the same time. Moving the lock from the method declaration into the method body postpones acquiring the lock and thereby reduces the likelihood of contention.
Narrow the scope of locks
Now that we are sure it is the data we need to protect, not the code, we should make sure we lock only where necessary, for example by refactoring the code above as follows:
public class GameServer {
  public Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public void join(Player player, Table table) {
    if (player.getAccountBalance() > table.getLimit()) {
      synchronized (tables) {
        List<Player> tablePlayers = tables.get(table.getId());
        if (tablePlayers.size() < 9) {
          tablePlayers.add(player);
        }
      }
    }
  }

  //other methods skipped for brevity
}
This way the potentially time-consuming check of the player's account balance (which may involve IO) is performed outside the scope of the lock. Note that the lock now only protects the check against the table's capacity; the account balance check is no longer part of the protected section.
Split the locks
As you can see from the last example, the entire data structure is still protected by the same lock. Considering that the structure may hold thousands of tables, each of which must be protected against exceeding its capacity, there is still a high risk of contention.
A simple solution to this is to introduce separate locks for each table, as shown in the following example:
public class GameServer {
  public Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public void join(Player player, Table table) {
    if (player.getAccountBalance() > table.getLimit()) {
      List<Player> tablePlayers = tables.get(table.getId());
      synchronized (tablePlayers) {
        if (tablePlayers.size() < 9) {
          tablePlayers.add(player);
        }
      }
    }
  }

  //other methods skipped for brevity
}
Now we synchronize access to a single table only, instead of all tables, which significantly reduces the likelihood of lock contention. To give a concrete example: if our data structure holds 100 table instances, the probability of contention is now roughly 100 times smaller than before.
Use thread-safe data structures
Another improvement is to replace traditional single-threaded data structures with ones explicitly designed for thread safety. For example, when a ConcurrentHashMap is used to store the table instances, the code might look like this:
public class GameServer {
  public Map<String, List<Player>> tables = new ConcurrentHashMap<String, List<Player>>();

  public void join(Player player, Table table) {/* method body skipped for brevity */}
  public void leave(Player player, Table table) {/* method body skipped for brevity */}

  public void createTable() {
    Table table = new Table();
    tables.put(table.getId(), new ArrayList<Player>());
  }

  public void destroyTable(Table table) {
    tables.remove(table.getId());
  }
}
The synchronized blocks inside join() and leave() remain exactly as in the previous example, because we still need to protect the integrity of each individual table's data; ConcurrentHashMap does not help with that. But createTable() and destroyTable() now rely on the ConcurrentHashMap for creating and destroying tables: those operations are fully synchronized by the map itself, which allows us to add and remove tables in parallel.
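If you want to lean on ConcurrentHashMap even further, one possible refinement (a sketch of my own, assuming the same Player and Table classes as the examples above, not part of the original article) is to create each table's player list atomically with computeIfAbsent:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GameServer {
  public Map<String, List<Player>> tables = new ConcurrentHashMap<String, List<Player>>();

  public void createTable() {
    Table table = new Table();
    // computeIfAbsent is atomic on ConcurrentHashMap: the player list is
    // created exactly once, even if several threads race to create or join
    // the same table at the same moment.
    tables.computeIfAbsent(table.getId(), id -> new ArrayList<Player>());
  }
}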
Other tips and tricks
Reduce the visibility of your locks. In the examples above, the locks are declared public (visible to callers), which allows ill-intentioned code to sabotage your work by synchronizing on your carefully chosen monitors.
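One way to apply this advice is to hide both the map and the monitor behind private final fields. The tablesLock object below is my own addition, and for brevity this sketch goes back to the coarser map-level lock rather than the per-table lock:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GameServer {
  // Neither the map nor the monitor is public, so callers outside this class
  // cannot synchronize on them and block our own threads.
  private final Object tablesLock = new Object();
  private final Map<String, List<Player>> tables = new HashMap<String, List<Player>>();

  public void join(Player player, Table table) {
    if (player.getAccountBalance() > table.getLimit()) {
      synchronized (tablesLock) {
        List<Player> tablePlayers = tables.get(table.getId());
        if (tablePlayers.size() < 9) {
          tablePlayers.add(player);
        }
      }
    }
  }
}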
Have a look at the java.util.concurrent.locks API to see whether any of the lock strategies already implemented there can be used to improve the solution above.
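As an illustration of what that API offers, the sketch below (my own code with a hypothetical TableSeats class, not the article's) uses a ReentrantLock whose tryLock() lets a caller back off immediately instead of blocking, something a plain synchronized block cannot express:

import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TableSeats {
  private final Lock tableLock = new ReentrantLock();
  private final List<Player> tablePlayers;

  public TableSeats(List<Player> tablePlayers) {
    this.tablePlayers = tablePlayers;
  }

  public boolean tryJoin(Player player) {
    // tryLock() returns immediately if the lock is busy instead of waiting.
    if (tableLock.tryLock()) {
      try {
        if (tablePlayers.size() < 9) {
          tablePlayers.add(player);
          return true;
        }
        return false; // table is full
      } finally {
        tableLock.unlock();
      }
    }
    return false; // lock was busy; the caller may retry later
  }
}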
Use atomic operations. The simple counter increment performed above does not actually require a lock; replacing the Integer used for counting with an AtomicInteger would suit this example well.
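A minimal sketch of that tip, using a hypothetical TableCounter class of my own rather than the article's code, tracks seated players with an AtomicInteger and a compare-and-set loop instead of a lock:

import java.util.concurrent.atomic.AtomicInteger;

public class TableCounter {
  private static final int LIMIT = 9;
  private final AtomicInteger seatedPlayers = new AtomicInteger(0);

  public boolean tryReserveSeat() {
    while (true) {
      int current = seatedPlayers.get();
      if (current >= LIMIT) {
        return false; // table is full
      }
      // compareAndSet only succeeds if no other thread changed the value in
      // the meantime, so the increment is race-free without any lock.
      if (seatedPlayers.compareAndSet(current, current + 1)) {
        return true;
      }
    }
  }

  public void releaseSeat() {
    seatedPlayers.decrementAndGet();
  }
}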
Finally, whether you are using Plumbr's automatic deadlock detection solution or digging the information out of thread dumps by hand, I hope this article helps you solve your lock contention problems.