The problem appears
One afternoon, the monitoring system suddenly fired an alert and the application threw an exception:
A closer look showed it was a transaction rollback exception: the transaction had been rolled back because of a deadlock. So it was a deadlock problem. Since I have a reasonable understanding of MySQL locks, I started investigating proactively.
First, check the InnoDB status of the database; the most recent deadlock is recorded there. Enter the following command:
SHOW ENGINE INNODB STATUS
The deadlock information is as follows (the SQL has been lightly sanitized):
------------------------
LATEST DETECTED DEADLOCK
------------------------
2019-02-22 15:10:56 0x7eec2f468700
*** (1) TRANSACTION:
TRANSACTION 2660206487, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 1136, 1 row lock(s)
MySQL thread id 31261312, OS thread handle 139554322093824, query id 11624975750 10.23.134.92 erp_crm__6f73 updating
/*id:3637ba36*/UPDATE tenant_config SET
open_card_point = 0
where tenant_id = 123
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 1322 page no 534 n bits 960 index uidx_tenant of table `erp_crm_member_plan`.`tenant_config` trx id 2660206487 lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 2660206486, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 31261311, OS thread handle 139552870532864, query id 11624975758 10.23.134.92 erp_crm__6f73 updating
/*id:3637ba36*/UPDATE tenant_config SET
open_card_point = 0
where tenant_id = 123
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 1322 page no 534 n bits 960 index uidx_tenant of table `erp_crm_member_plan`.`tenant_config` trx id 2660206486 lock mode S
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 1322 page no 534 n bits 960 index uidx_tenant of table `erp_crm_member_plan`.`tenant_config` trx id 2660206486 lock_mode X locks rec but not gap waiting
*** WE ROLL BACK TRANSACTION (1)
----------------
Let me briefly analyze this deadlock log. When transaction 1 executed the UPDATE statement, it located the row through the uidx_tenant index and then needed an X lock (row lock) on the record matched by the WHERE condition. Transaction 2 executed the same UPDATE statement and likewise tried to acquire an X lock on the same record through uidx_tenant, and a deadlock occurred, so transaction 1 was rolled back. I was quite confused at the time, so I recalled the four necessary conditions for a deadlock:
Mutual exclusion.
Hold and wait.
No preemption.
Circular wait.
From the log, transaction 1 and transaction 2 are both competing for the row lock on the same row, which is different from the usual cyclic competition for locks: no matter how you look at it, the circular-wait condition cannot be satisfied. After a reminder from a colleague, since the deadlock log alone could not explain it, the problem could only be investigated through the business code and business logs. The logic of the code is as follows:
public int saveTenantConfig(PoiContext poiContext, TenantConfigDO tenantConfig) {
    try {
        // First try to insert the configuration record.
        return tenantConfigMapper.saveTenantConfig(poiContext.getTenantId(), poiContext.getPoiId(), tenantConfig);
    } catch (DuplicateKeyException e) {
        // The record already exists (unique index conflict), so update it instead.
        LOGGER.warn("[saveTenantConfig] primary key conflict, update the record. context:{}, config:{}", poiContext, tenantConfig);
        return tenantConfigMapper.updateTenantConfig(poiContext.getTenantId(), tenantConfig);
    }
}
This code saves a configuration record; if a unique index conflict occurs, it updates the existing record instead. Admittedly the style here is not ideal; the same effect can be achieved with
INSERT INTO ... ON DUPLICATE KEY UPDATE
but even that form can actually deadlock in the same way. After reading the code, my colleague sent me the business log from that time.
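For reference, with the tenant_config table shown later in this article, that upsert form would look something like this (a sketch; the values are illustrative):
INSERT INTO tenant_config (tenant_id, open_card_point)
VALUES (123, 0)
ON DUPLICATE KEY UPDATE open_card_point = 0;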
You can see three log entries that occurred at the same time, indicating that three requests hit the unique index conflict and entered the update branch simultaneously, and then the deadlock occurred. At this point the answer finally started to become a little clearer.
Now let's look at the table structure (simplified):
CREATE TABLE `tenant_config` (
  `id` bigint(21) NOT NULL AUTO_INCREMENT,
  `tenant_id` int(11) NOT NULL,
  `open_card_point` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uidx_tenant` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 ROW_FORMAT=COMPACT
Here tenant_id carries a unique index, and the WHERE conditions for both the insert and the update go through this unique index:
UPDATE tenant_config SET
open_card_point = 0
where tenant_id = 123
At this point I suspected the locking performed on the unique index during the insert was involved. Let's dig into it in the next section.
In-depth analysis
Above we said that three transactions entered the update statement. To simplify the explanation, it is enough for two transactions to enter the update at the same time. The following steps show the whole sequence of events:
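1. Transaction 1 executes the INSERT, hits a unique key conflict on uidx_tenant, and InnoDB places an S lock on the conflicting index record.
2. Transaction 2 executes the same INSERT, hits the same conflict, and also acquires an S lock on that record (S locks are compatible with each other).
3. Transaction 1 catches the conflict and executes the UPDATE, which needs an X lock on the record; it blocks, waiting for transaction 2's S lock to be released.
4. Transaction 2 likewise executes the UPDATE and requests an X lock; it blocks waiting for transaction 1's S lock. Each transaction now waits for the other, InnoDB detects the deadlock, and transaction 1 is rolled back.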
Tip: an S lock is a shared lock and an X lock is an exclusive lock. In general, an X lock conflicts with both X locks and S locks, while S locks are compatible with each other.
From the process above, we can see that the key to this deadlock is the acquisition of the S lock. Why does the insert need to acquire an S lock? Because it has to check the unique index, and under the RR isolation level that check is a current read, which requires an S lock on the record. The check finds that the unique key already exists, so both transactions end up holding an S lock on the same record; each transaction's subsequent UPDATE is then blocked by the other's S lock, forming the circular wait described above.
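The sequence can be reproduced by hand with two MySQL sessions. This is a minimal sketch, assuming the tenant_config table above already contains a committed row with tenant_id = 123 and that autocommit is off:
-- session 1
BEGIN;
INSERT INTO tenant_config (tenant_id, open_card_point) VALUES (123, 1);
-- fails with ERROR 1062 (duplicate key); the transaction keeps an S lock on the record

-- session 2
BEGIN;
INSERT INTO tenant_config (tenant_id, open_card_point) VALUES (123, 1);
-- also fails with ERROR 1062 and also holds an S lock (S locks are compatible)

-- session 1
UPDATE tenant_config SET open_card_point = 0 WHERE tenant_id = 123;
-- blocks: the X lock it needs conflicts with session 2's S lock

-- session 2
UPDATE tenant_config SET open_card_point = 0 WHERE tenant_id = 123;
-- ERROR 1213: Deadlock found when trying to get lock; InnoDB rolls one transaction back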
Tip: in MVCC, the difference between a current read and a snapshot read is that a current read takes a lock each time (either a shared lock or an exclusive lock) and reads the latest committed data, while a snapshot read reads the snapshot taken at the start of the transaction, which is implemented through the undo log.
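In SQL terms, assuming the same table, the two kinds of read look like this:
-- snapshot read: a plain SELECT under RR reads from the MVCC snapshot and takes no locks
SELECT open_card_point FROM tenant_config WHERE tenant_id = 123;

-- current reads: always read the latest row version and take a lock
SELECT open_card_point FROM tenant_config WHERE tenant_id = 123 LOCK IN SHARE MODE; -- S lock
SELECT open_card_point FROM tenant_config WHERE tenant_id = 123 FOR UPDATE; -- X lock
-- UPDATE, DELETE, and the duplicate-key check of INSERT are current reads internally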
That is the cause of the whole deadlock. Another situation that can produce this kind of deadlock is three simultaneous inserts of the same key: if the transaction that inserted first ends up being rolled back, the other two transactions will deadlock in the same way.
Solution
The core of the fix is to get rid of the S lock. Here are three solutions for reference:
1. Lower the isolation level from RR to RC. Under RC the check uses a snapshot read, so no S lock is added.
2. When saving, use SELECT ... FOR UPDATE first to add an X lock, so that the S lock will not be added (see the sketch below).
3. Take a distributed lock in advance, using Redis, ZooKeeper, or similar. For distributed locks, see my earlier article "Let's talk about distributed locks".
The first option is not very realistic, since the isolation level cannot be changed lightly, and the third is relatively cumbersome, so the second option is what we finally settled on.
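Applied to the code above, the second fix looks roughly like this. This is a sketch, not the original code: selectTenantConfigForUpdate is an assumed mapper method backed by SELECT * FROM tenant_config WHERE tenant_id = #{tenantId} FOR UPDATE.
public int saveTenantConfig(PoiContext poiContext, TenantConfigDO tenantConfig) {
    // Take an X lock on the row up front so concurrent savers serialize here,
    // instead of each acquiring an S lock through a failed insert.
    TenantConfigDO existing = tenantConfigMapper.selectTenantConfigForUpdate(poiContext.getTenantId());
    if (existing == null) {
        return tenantConfigMapper.saveTenantConfig(poiContext.getTenantId(), poiContext.getPoiId(), tenantConfig);
    }
    return tenantConfigMapper.updateTenantConfig(poiContext.getTenantId(), tenantConfig);
}
One caveat: when the row does not exist yet, the locking read under RR takes a gap lock, which has its own subtleties, so this sketch mainly covers the conflict-then-update path described in this article.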