
What are MySQL locks and classifications?



1. Database concurrency scenario

In high-concurrency scenarios, leaving other middleware aside, a database faces the following access patterns:

  • Read-read: no problems arise and no concurrency control is needed.

  • Read-write: there are thread-safety issues; transaction isolation problems such as dirty reads, non-repeatable reads, and phantom reads may occur.

  • Write-write: there are thread-safety issues, and lost updates may occur (both the first and the second type of lost update).

To address these problems, the SQL standard specifies which anomalies may occur at each isolation level:

MySQL's four isolation levels:

  • READ UNCOMMITTED (read uncommitted): dirty reads, non-repeatable reads, and phantom reads may all occur.

  • READ COMMITTED (read committed): dirty reads are solved; non-repeatable reads and phantom reads may occur.

  • REPEATABLE READ (repeatable read): dirty reads and non-repeatable reads are solved; phantom reads may still occur.

  • SERIALIZABLE (serializable): dirty reads, non-repeatable reads, and phantom reads are all solved.

As you can see, MySQL already solves the non-repeatable read problem at the REPEATABLE READ isolation level, and it largely solves phantom reads as well, but in extreme cases phantom reads can still occur.

So what is the solution? Generally speaking, there are two solutions:

1️⃣ MVCC for read operations and locking for write operations

For reads, MVCC under the RR level works like this: when the transaction reads, a ReadView is generated and used to find the record version that is visible to the transaction; historical versions are reconstructed from the undo log. Generating the ReadView effectively takes a snapshot, so such SELECT queries are snapshot reads (also called consistent reads). Under RR, only the first SELECT executed in a transaction generates a ReadView; later SELECTs reuse it, which avoids non-repeatable reads and phantom reads to a large extent.

For writes, a snapshot read (consistent read) does not lock any record in the table, and the version seen through the ReadView is a historical version, so it does not conflict with the latest version that a write operates on; other transactions remain free to change records in the table.
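To make the snapshot-read behaviour concrete, here is a minimal two-session sketch under REPEATABLE READ; the account table, its id and balance columns, and the values are hypothetical, used only for illustration:

# Session A (REPEATABLE READ)
BEGIN;
SELECT balance FROM account WHERE id = 1;   # first SELECT builds the ReadView, returns e.g. 100

# Session B
UPDATE account SET balance = 200 WHERE id = 1;
COMMIT;

# Session A
SELECT balance FROM account WHERE id = 1;   # snapshot read reuses the ReadView, still returns 100
COMMIT;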

2️⃣ Locking for both read and write operations

Some business scenarios do not allow reading an old version of a record: every read must see the latest version. For example, in a bank deposit transaction you must first read the account balance, then add the deposit amount to it, and finally write the result back to the database. Once the balance has been read, you do not want other transactions to access it until the deposit transaction finishes. In that case the record must be locked when it is read, which means read operations have to queue up against write operations just as write-write operations do.
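A minimal sketch of that deposit flow using a locking read; the account table and its columns are hypothetical:

BEGIN;

# Read the balance and lock the row so other transactions cannot touch it
SELECT balance FROM account WHERE id = 1 FOR UPDATE;

# ... the application adds the deposit amount to the balance it just read ...

UPDATE account SET balance = balance + 100 WHERE id = 1;

COMMIT;   # the lock is released here; other transactions may now access the balance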

For dirty reads: they happen because the current transaction reads a record written by another, uncommitted transaction. If the writing transaction locks the record, the current transaction cannot read it until the lock is released, so dirty reads cannot occur.

For non-repeatable reads: they happen because the current transaction reads a record, another transaction modifies it and commits, and the current transaction gets a different value when it reads the record again. If the current transaction locks the record while reading it, the other transaction cannot modify it, so non-repeatable reads naturally cannot occur.

For phantom reads: they happen because the current transaction reads the records in a range, another transaction inserts a new record into that range, and when the current transaction reads the range again it finds the newly inserted record. Such newly inserted records are called phantom records.

How to understand this range? As follows:

  • Suppose the table user contains only one row, with id = 1.

  • Transaction A runs a query for id = 1 and finds that row. If it runs a range query instead, such as id IN (1, 2), it still finds only one row.

  • Transaction B then inserts a row with id = 2 and commits.

  • Transaction A runs the id IN (1, 2) query again and now reads 2 rows: a phantom read has occurred.

Note: because of RR's consistent snapshot, transaction A's plain SELECT actually still cannot see the row with id = 2; but if A first executes UPDATE ... WHERE id = 2 (a current read), the subsequent range query will see it, as the transcript below shows.
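A SQL transcript of the scenario above; the user table is hypothetical, with just an id column plus a name column for the update:

# Transaction A
BEGIN;
SELECT * FROM user WHERE id IN (1, 2);      # returns only the row with id = 1

# Transaction B
INSERT INTO user (id, name) VALUES (2, 'b');
COMMIT;

# Transaction A
SELECT * FROM user WHERE id IN (1, 2);      # snapshot read: still only id = 1
UPDATE user SET name = 'x' WHERE id = 2;    # current read: modifies the row B just committed
SELECT * FROM user WHERE id IN (1, 2);      # now returns 2 rows, the phantom record appears
COMMIT;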

Solving phantom reads purely by locking is not easy, because the phantom records do not yet exist when the current transaction reads the range for the first time, so there is nothing to attach a lock to: you do not know what to lock.

So how does InnoDB solve it? Let's first take a look at what locks the InnoDB storage engine has.

2. Locks and classifications in MySQL

In the official MySQL documentation, the InnoDB storage engine lists lock types such as shared and exclusive locks, intention locks, record locks, gap locks, next-key locks, insert intention locks, and AUTO-INC locks.


Listed that way it still looks confusing, so we can organize the locks the way locks are usually studied for the JDK: by granularity, by compatibility, by algorithm, and by the optimistic versus pessimistic strategy.


3. Classification by lock granularity

What is lock granularity? The so-called lock granularity refers to the scope of what you want to lock.

For example, if you go to the bathroom at home, you only need to lock the bathroom. You don’t need to lock the entire home to prevent family members from entering. The bathroom is your locking granularity.

What is a reasonable locking granularity?

In fact, the bathroom is not only used for going to the toilet, but also for taking a shower and washing hands. This involves the issue of optimizing locking granularity.

When you take a shower in the bathroom, others can actually go in and wash their hands at the same time, as long as the areas are isolated. If the toilet, bathtub, and washbasin are separated and relatively independent (wet and dry areas are split), the bathroom can serve three people at once, as long as they are not doing the same thing. This refines the lock granularity: you only need to close the shower door while showering, and others can still go in and wash their hands. If the functional areas are not separated when the bathroom is designed, its resources cannot be used to the fullest.

Similarly, MySQL also has lock granularity, usually divided into three types: row locks, table locks, and page locks.

3.1 Row lock

The shared locks and exclusive locks introduced below are actually taken on individual records, so they can also be called row locks.

Locking a record affects only that record, so row locks have the finest granularity in MySQL. Row locks are the default lock type of the InnoDB storage engine.

It has the following characteristics:

  1. Lowest probability of lock conflicts, highest concurrency

    Because row locks have a small granularity, contention for the locked resource is minimal, so lock conflicts are rare and concurrency is high.

  2. High overhead, slow to acquire

    Locks cost performance. If many individual rows in the database are locked, a lot of resources are tied up, and each lock request has to wait for the previous holder to release the lock.

  3. Deadlocks can occur

    What a deadlock is will be explained later.

3.2 Table lock

A table-level lock locks the entire table, which avoids deadlocks well. It is the coarsest-grained locking mechanism in MySQL.

Table locks are the default lock type of the MyISAM storage engine.

It has the following characteristics:

  1. Low overhead, fast to acquire

    Because the whole table is covered by a single lock, acquiring it is faster than locking individual rows.

  2. No deadlocks

    The entire table is locked and other transactions simply cannot acquire the lock, so deadlocks naturally cannot occur.

  3. Coarse granularity, high probability of lock conflicts, low concurrency

3.3 Page lock

Page-level locking is a unique locking level in MySQL and is not common in other database management software.

The granularity of page-level locks sits between row-level and table-level locks, so both the resource overhead of acquiring them and the concurrency they allow are between the two. In addition, page-level locks, like row-level locks, can cause deadlocks.

Comparing the three granularities:

  • Lock granularity: row locks are the smallest, table locks the largest, page locks in between.

  • Locking speed: row locks are the slowest, table locks the fastest, page locks in between.

  • Probability of lock conflicts: row locks the lowest, table locks the highest, page locks in between.

  • Concurrency: row locks the highest, table locks the lowest, page locks in between.

  • Performance overhead: row locks the largest, table locks the smallest, page locks in between.

  • Deadlocks: row locks and page locks can deadlock; table locks cannot.

4. Classification by lock compatibility

In MySQL, reads fall mainly into two categories: snapshot reads and current reads:

  • Snapshot reading

    Snapshot read reads snapshot data. Ordinary SELECT without locking is a snapshot read.

    SELECT * FROM table WHERE ...
  • Current read

    A current read reads the latest data rather than a historical version. Locking SELECTs, as well as inserts, deletes, and updates, are all current reads.

    SELECT * FROM table LOCK IN SHARE MODE;
    SELECT * FROM table FOR UPDATE;
    INSERT INTO table values ...
    DELETE FROM table WHERE ...
    UPDATE table SET ...

In most cases we operate on the database with current reads. In concurrent scenarios, read-read access must remain unaffected, while write-write, read-write, and write-read access should block one another; to achieve that, MySQL provides shared locks and exclusive locks.

4.1 Shared locks and exclusive locks

A shared lock (Shared Lock), also called a read lock, or S lock for short: the data can be read concurrently by multiple transactions, but no transaction can modify it while the lock is held.

An exclusive lock (Exclusive Lock), also called a write lock, or X lock for short: if a transaction takes an exclusive lock on a row, only that transaction can read and write the row. Until that transaction ends, other transactions cannot place any lock on the row; they can still read it through snapshot reads, but any write (or locking read) has to wait for the lock to be released.

Let's analyze lock acquisition with two transactions, A and B:

  • Transaction A acquires the S lock on a record, and transaction B also requests the S lock on the same record: B can acquire it too, so A and B hold the record's S lock at the same time.

  • If transaction B instead requests the X lock on that record, the request blocks until the S lock is released when transaction A commits.

  • If transaction A acquires the X lock first, then whether transaction B requests the S lock or the X lock on the record, it blocks until transaction A commits.

Therefore, we can say that S lock and S lock are compatible, S lock and X lock are incompatible, and X lock and X lock are also incompatible.
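A minimal two-session sketch of this compatibility; the user table is hypothetical:

# Transaction A
BEGIN;
SELECT * FROM user WHERE id = 1 LOCK IN SHARE MODE;   # A holds the S lock on the record

# Transaction B
BEGIN;
SELECT * FROM user WHERE id = 1 LOCK IN SHARE MODE;   # succeeds: S locks are compatible
SELECT * FROM user WHERE id = 1 FOR UPDATE;           # blocks: the X lock conflicts with A's S lock
                                                      # and is granted only after A commits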

4.2 Intention Lock

An intention shared lock (Intention Shared Lock), IS lock for short: before a transaction adds an S lock to a record, it must first take an IS lock at the table level.

An intention exclusive lock (Intention Exclusive Lock), IX lock for short: before a transaction adds an X lock to a record, it must first take an IX lock at the table level.

Intention locks are table-level locks. Their only purpose is to make it quick to decide, when a table-level S or X lock is requested later, whether any record in the table is already locked, avoiding a scan of the table for locked records. IS locks are compatible with IS locks, and IX locks are compatible with IX locks; in fact, intention locks are compatible with each other.

Why do we need intention locks?

InnoDB's intention locks matter when locks of multiple granularities coexist. For example, transaction A wants to take an S lock on a whole table; if some row in the table already carries an X lock from transaction B, A's request must block. If the table holds a lot of data, checking a lock flag row by row would be very expensive and would hurt system performance: with 100 million records in the table, and transaction A holding row locks on a few of them, a transaction B that wants a table-level lock would otherwise have to check whether any of those 100 million records is locked. With intention locks, transaction A takes an intention lock on the table before taking the X lock on the record it updates, so transaction B only has to check whether the table carries an intention lock that conflicts with the lock it plans to take; if so, B waits for A to release it, without checking every record. B does not need to know exactly which row is locked, only that at least one row is.

To put it bluntly, the main job of intention locks is to resolve the conflict between row locks and table locks: they signal that a transaction is holding, or is about to hold, a lock on some row of the table.
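A rough sketch of the idea; the user table is hypothetical, and whether the LOCK TABLES request actually waits also depends on server settings such as innodb_table_locks and autocommit:

# Session 1: row-level write
BEGIN;
SELECT * FROM user WHERE id = 1 FOR UPDATE;   # InnoDB first takes an IX lock on the table,
                                              # then an X record lock on the row with id = 1

# Session 2: table-level read lock
LOCK TABLES user READ;    # requests a table-level S lock; it conflicts with the IX lock above,
                          # so the server can make it wait without scanning the rows for record locks
UNLOCK TABLES;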

Compatibility of the various table-level locks:

  • IS: compatible with IS, IX, and S; conflicts with X.

  • IX: compatible with IS and IX; conflicts with S and X.

  • S: compatible with IS and S; conflicts with IX and X.

  • X: conflicts with IS, IX, S, and X.

4.3 Locks for read operations

For MySQL read operations, there are two ways to lock.

1️⃣ SELECT * FROM table LOCK IN SHARE MODE

If the current transaction executes this statement, it places an S lock on every record it reads. Other transactions can still acquire S locks on those records (for example, by also reading them with SELECT ... LOCK IN SHARE MODE), but they cannot acquire X locks on them (for example, by reading them with SELECT ... FOR UPDATE or by modifying them directly).

If other transactions want to acquire X locks on these records, they block until the S locks are released when the current transaction commits.

2️⃣ SELECT * FROM table FOR UPDATE

If the current transaction executes this statement, it places an X lock on every record it reads, so other transactions can acquire neither S locks on those records (for example, by reading them with SELECT ... LOCK IN SHARE MODE) nor X locks on them (for example, by reading them with SELECT ... FOR UPDATE or by modifying them directly).

If other transactions want to acquire the S lock or X lock of these records, they will block until the X lock on these records is released after the current transaction is committed.

4.4 Locks for write operations

DELETE, UPDATE, and INSERT are the common write operations in MySQL. They lock implicitly: the locks are acquired and released automatically.

1️⃣ DELETE

A DELETE on a record first locates the record's position in the B-tree, then acquires the record's X lock, and then marks it as deleted. You can think of this process as a locking read that acquires the X lock while locating the record to be deleted in the B-tree.

2️⃣ INSERT

Normally, inserting a new record takes no explicit lock; InnoDB protects the new record with a kind of implicit lock, so the inserted record cannot be accessed by other transactions before the inserting transaction commits.

3️⃣ UPDATE

There are three situations when performing an UPDATE operation on a record:

① If the record's key is not modified and the storage space occupied by the updated columns does not change, InnoDB first locates the record in the B-tree, then acquires the record's X lock, and finally modifies it in place. Locating the record to be modified in the B-tree can again be seen as a locking read that acquires the X lock.

② If the key is not modified but at least one updated column changes the storage space it occupies, InnoDB locates the record in the B-tree, acquires the record's X lock, deletes the record completely (moving it to the garbage list), and then inserts a new record. Locating the record to be modified is a locking read that acquires the X lock, and the newly inserted record is protected by the implicit lock that the INSERT provides.

③ If the key itself is modified, the operation is equivalent to a DELETE of the original record followed by an INSERT, and locking follows the DELETE and INSERT rules above.

PS: why can other transactions still read a record while its write lock is held?

Because InnoDB has MVCC (multi-version concurrency control), plain reads can be done as snapshot reads and are not blocked.
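A minimal sketch of this; the user table and its columns are hypothetical:

# Session 1
BEGIN;
UPDATE user SET age = 20 WHERE id = 1;        # holds the X lock on the row until COMMIT

# Session 2
SELECT * FROM user WHERE id = 1;              # snapshot read: returns the pre-update version immediately
SELECT * FROM user WHERE id = 1 FOR UPDATE;   # current read: blocks until session 1 commits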


5. Classification by lock algorithm

From the descriptions above we can see that the main distinction between locks is their granularity, and the lock InnoDB actually uses is the row lock, also called the record lock. Note, however, that what gets locked is the index entry in the index.

This way of implementing row locks means that InnoDB uses row-level locks only when data is retrieved through an index condition; otherwise, InnoDB uses table locks.

Whether a primary key index, a unique index, or an ordinary secondary index is used, InnoDB locks the data with row locks.

Row locks can only be used when the execution plan actually uses an index: even if an indexed column appears in the condition, whether the index is used to retrieve the data is decided by MySQL by comparing the cost of different execution plans. If MySQL decides that a full table scan is cheaper, for example on a very small table, it will not use the index, and in that case InnoDB uses a table lock instead of row locks.

Also, when we retrieve data with a range condition rather than an equality condition and request locks, InnoDB locks the index entries of the existing records that match the condition.

Even so, InnoDB's row locks come in several types; in other words, placing different lock types on the same record has different effects. The common row-lock types are the following.

5.1 Record Lock

A record lock locks a single index record.

What a record lock locks is always the index, not the record itself; even if the table has no index at all, InnoDB creates a hidden clustered primary key index in the background, and it is that hidden clustered index entry that gets locked.

Record locks come in S and X flavors: once a transaction holds an S-type record lock on a record, other transactions can still acquire an S-type record lock on it but not an X-type record lock; once a transaction holds an X-type record lock on a record, other transactions can acquire neither an S-type nor an X-type record lock on it.
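A minimal sketch using the test table defined in the next subsection:

# Session 1
BEGIN;
SELECT * FROM test WHERE id = 5 FOR UPDATE;           # X-type record lock on the index entry for id = 5

# Session 2
SELECT * FROM test WHERE id = 7 FOR UPDATE;           # not blocked: a different record
SELECT * FROM test WHERE id = 5 LOCK IN SHARE MODE;   # blocks: an S record lock conflicts with the X record lock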

5.2 Gap Locks

A gap lock locks the gaps before and after an index entry; it does not lock the index entry itself.

MySQL can solve the phantom read problem at the REPEATABLE READ isolation level in two ways: with MVCC, or with locking. The difficulty with the locking approach is that when the transaction performs its first read, the phantom records do not exist yet, so there is no record to attach a record lock to. Gap locks solve this by locking the gap instead.

Suppose we have a table like this:

CREATE TABLE test (
    id INT (1) NOT NULL AUTO_INCREMENT,
    number INT (1) NOT NULL COMMENT '数字',
    PRIMARY KEY (id),
    KEY number (number) USING BTREE
) ENGINE = INNODB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;

# Insert the following data
INSERT INTO test VALUES (1, 1); 
INSERT INTO test VALUES (5, 3); 
INSERT INTO test VALUES (7, 8); 
INSERT INTO test VALUES (11, 12);

Now run the following.

Start a transaction A:

BEGIN;

SELECT * FROM test WHERE number = 3 FOR UPDATE;

At this point the gaps between (1,1) and (5,3) and between (5,3) and (7,8) on index number are locked.


If we now start another transaction B and try to insert data:

BEGIN;

# blocks
INSERT INTO test (id, number) VALUES (2,2);

The INSERT blocks; if transaction A does not commit, it eventually fails with a lock wait timeout.

Why can't it be inserted? Because the record (2,2), if inserted, would land on index number exactly inside the locked gap between (1,1) and (5,3), so the insert is not allowed. Outside the locked gaps, inserts are of course possible, for example:

INSERT INTO test (id, number) VALUES (8,8);

5.3 Next-Key Locks

A next-key lock is the combination of a record lock on an index record and a gap lock on the gap before that index record, so it does include the record itself. Each next-key lock covers a left-open, right-closed interval; in other words, a gap lock covers only the gap and does not lock the record row, while a next-key lock additionally locks the row at the right boundary of the gap.

InnoDB uses the REPEATABLE READ isolation level by default. At this level InnoDB uses next-key locks for searches and index scans, which prevents phantom reads.
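A rough sketch with the same test table; the exact set of locks taken depends on the index and the MySQL version:

# Session 1
BEGIN;
SELECT * FROM test WHERE number > 3 AND number < 12 FOR UPDATE;
# next-key locks cover the matching index entries and the gaps before them,
# roughly the interval (3, 12) on index number

# Session 2
INSERT INTO test (id, number) VALUES (6, 5);   # blocks: 5 falls inside the locked range
UPDATE test SET number = 9 WHERE id = 7;       # blocks: the record (7, 8) itself is locked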

6. Optimistic locking and pessimistic locking

Optimistic locks and pessimistic locks are not concrete locks but locking strategies; the idea is not specific to MySQL and also applies to common middleware such as Redis.

6.1 Optimistic locking

Optimistic locking takes an optimistic attitude: when we update a record, we assume that nobody else has operated on the data in the meantime.

A common way to implement optimistic locking

The common approach is to add a version column to the table to track a version number, incrementing it by 1 on every modification.

Before each update, first read the row's version, then run the business logic; when updating the data, compare the version you read with the version currently in the database. If they match, no other thread has modified the data in the meantime; otherwise, handle the conflict appropriately.
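A minimal sketch of version-based optimistic locking; the account table and its version column are hypothetical:

# Read the current state and version
SELECT balance, version FROM account WHERE id = 1;   # e.g. balance = 100, version = 3

# ... business logic computes the new balance ...

# Write back only if nobody changed the row in the meantime
UPDATE account
SET balance = 150, version = version + 1
WHERE id = 1 AND version = 3;
# If the statement reports 0 affected rows, another transaction got there first: retry or report a conflict.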

6.2 Pessimistic locking

Pessimistic locking takes a pessimistic attitude: it assumes from the start that the data will be modified by someone else.

Pessimistic locking has two implementations

Shared locks (read locks) and exclusive locks (write locks), as described above.
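A minimal pessimistic-locking sketch with the same hypothetical account table:

BEGIN;

# Lock the row before working on it
SELECT balance FROM account WHERE id = 1 FOR UPDATE;   # X lock: other writers and locking readers wait

UPDATE account SET balance = balance + 50 WHERE id = 1;

COMMIT;   # the lock is released here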

7. Deadlock

A deadlock is a blocking situation in which two or more processes, while executing, wait on each other because of resource contention or mutual communication, and none of them can make progress without outside intervention. The system is then said to be in a deadlocked state, or to have produced a deadlock.

Conditions for a deadlock

  • Mutual exclusion: a resource can be used by only one process at a time.

  • Hold and wait: a process that blocks while requesting a resource keeps hold of the resources it has already acquired.

  • No preemption: resources a process has acquired cannot be forcibly taken away before it has finished with them.

  • Circular wait: several processes form a circular chain in which each waits for a resource held by the next.

The same thing happens in MySQL. Take the following table:

CREATE TABLE `user` (
  `id` bigint NOT NULL COMMENT '主键',
  `name` varchar(20) DEFAULT NULL COMMENT '姓名',
  `sex` char(1) DEFAULT NULL COMMENT '性别',
  `age` varchar(10) DEFAULT NULL COMMENT '年龄',
  `url` varchar(40) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `suf_index_url` (`name`(3)) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;

# Data
INSERT INTO `user` (`id`, `name`, `sex`, `age`, `url`) VALUES ('1', 'a', '1', '18', 'https://javatv.net');
INSERT INTO `user` (`id`, `name`, `sex`, `age`, `url`) VALUES ('2', 'b', '1', '18', 'https://javatv.net');

Execute the statements in the following order:



Transaction A: BEGIN;
Transaction B: BEGIN;
Transaction A: SELECT * FROM user WHERE name='a' FOR UPDATE;
Transaction B: SELECT * FROM user WHERE name='b' FOR UPDATE;
Transaction A: SELECT * FROM user WHERE name='b' FOR UPDATE;   # blocks
Transaction B: SELECT * FROM user WHERE name='a' FOR UPDATE;   # deadlock

1. Open two transactions, A and B.

2. A first queries the row with name='a' (with FOR UPDATE), then B queries the row with name='b' (with FOR UPDATE).

3. While B has not released its lock, A tries to lock the row with name='b'; A blocks.

4. If B, still holding its lock, now tries to lock the row with name='a', a deadlock occurs.


At this point MySQL detects the deadlock and terminates transaction B; switching back to transaction A, the SQL statement that had been blocked has now completed. You can inspect the deadlock with show engine innodb status \G.

How to avoid deadlocks

The case above shows the key to a deadlock: two (or more) sessions acquire locks in different orders. So when executing SQL, keep the lock acquisition order consistent, and lock all the rows you need in one go whenever possible.
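A sketch of the fix for the case above: if both transactions lock the rows in the same order (or lock everything they need in one statement), the wait cycle cannot form:

# Both transaction A and transaction B use the same order:
BEGIN;
SELECT * FROM user WHERE name = 'a' FOR UPDATE;
SELECT * FROM user WHERE name = 'b' FOR UPDATE;
# ... do the work ...
COMMIT;

# Or lock all required rows in a single statement:
SELECT * FROM user WHERE name IN ('a', 'b') FOR UPDATE;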
