
A detailed explanation of MySql row-level locks and table-level locks

藏色散人 | 2021-09-21


How to ensure the consistency and validity of concurrent data access is a problem every database must solve, and lock conflicts are an important factor affecting the performance of concurrent database access. From this perspective, locks are especially important, and especially complex, for databases.

MySQL Lock Overview

Compared with other databases, MySQL’s lock mechanism is relatively simple. Its most significant feature is that different storage engines support different lock mechanisms.
For example,

MyISAM and MEMORY storage engines use table-level locking.

The InnoDB storage engine supports both row-level locking and table-level locking, but row-level locking is used by default.

The characteristics of these three kinds of locks in MySQL can be roughly summarized as follows:

Table-level locks: low overhead and fast locking; no deadlocks; large locking granularity, so the highest probability of lock conflicts and the lowest concurrency.

Row-level locks: high overhead and slow locking; deadlocks may occur; the smallest locking granularity, so the lowest probability of lock conflicts and the highest concurrency.

Page locks: overhead and locking time between those of table locks and row locks; deadlocks may occur; locking granularity between table locks and row locks, with moderate concurrency.

From the lock perspective alone: table-level locks are better suited to applications that are mainly queries, with only a small amount of data updated by index conditions, such as Web applications; row-level locks are better suited to applications with a large number of concurrent updates of small amounts of different data by index conditions, together with concurrent queries, such as some online transaction processing (OLTP) systems. In the following sections, we focus on MySQL table locks and InnoDB row locks.

MyISAM table lock

The MyISAM storage engine supports only table locks, which was also the only lock type supported in the early versions of MySQL. As applications' requirements for transaction integrity and concurrency kept growing, MySQL began to develop transaction-based storage engines; the BDB storage engine, which supports page locks, and the InnoDB storage engine, which supports row locks, gradually appeared (InnoDB was actually developed by a separate company, since acquired by Oracle). However, MyISAM's table lock is still the most widely used lock type. This section introduces the use of MyISAM table locks in detail.

Query table-level lock contention

You can analyze the table lock contention on the system by checking the table_locks_waited and table_locks_immediate status variables:

mysql> show status like 'table%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Table_locks_immediate | 2979  |
| Table_locks_waited    | 0     |
+-----------------------+-------+
2 rows in set (0.00 sec)

If the value of Table_locks_waited is relatively high, it indicates serious table-level lock contention.

MySQL table-level lock lock mode

MySQL table-level lock has two modes: table shared read lock (Table Read Lock) and table exclusive write lock (Table Write Lock).

A read operation on a MyISAM table will not block other users' read requests for the same table, but will block write requests for the same table;

A write operation on a MyISAM table will block other users' read and write requests for the same table;

In other words, on a MyISAM table reads and writes are serial with respect to each other, and so are writes with respect to other writes. When a thread obtains a write lock on a table, only the thread holding the lock can update the table; read and write operations from other threads wait until the lock is released.

Obtain the WRITE lock on table film_text:

mysql> lock table film_text write;

Query OK, 0 rows affected (0.00 sec)

The current session can query, update, and insert into the locked table:

mysql> select film_id,title from film_text where film_id = 1001;
+---------+-------------+
| film_id | title       |
+---------+-------------+
|    1001 | Update Test |
+---------+-------------+
1 row in set (0.00 sec)

mysql> insert into film_text (film_id,title) values(1003,'Test');
Query OK, 1 row affected (0.00 sec)

mysql> update film_text set title = 'Test' where film_id = 1001;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Other sessions' queries on the locked table are blocked and must wait for the lock to be released:

mysql> select film_id,title from film_text where film_id = 1001;
(waiting)

The current session releases the lock:

mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)

The other session then obtains the lock, and its query returns:

mysql> select film_id,title from film_text where film_id = 1001;
+---------+-------+
| film_id | title |
+---------+-------+
|    1001 | Test  |
+---------+-------+
1 row in set (57.59 sec)

How to add a table lock?

MyISAM automatically adds read locks to all tables involved before executing a query statement (SELECT), and automatically adds write locks to the tables involved before executing update operations (UPDATE, DELETE, INSERT, etc.). This process requires no user intervention, so users generally do not need to lock MyISAM tables explicitly with the LOCK TABLES command. In the examples here, explicit locking is basically done for demonstration convenience and is not required.

Explicit locking of MyISAM tables is generally done to simulate transactional behavior to a certain extent and achieve a consistent read of multiple tables at a given point in time. For example, suppose there is an order table orders that records the total amount of each order, and an order detail table order_detail that records the subtotal amount of each product in each order, and we need to check whether the total amounts in the two tables match. We might execute the following two SQL statements:

Select sum(total) from orders;

Select sum(subtotal) from order_detail;

At this time, if the two tables are not locked first, an erroneous result may be produced, because the order_detail table may change while the first statement is executing. Therefore, the correct approach is:

Lock tables orders read local, order_detail read local;

Select sum(total) from orders;

Select sum(subtotal) from order_detail;

Unlock tables;

The following two points should be noted in particular.

The above example adds the "local" option to LOCK TABLES. Its effect is to allow other users to insert records concurrently at the end of the table when MyISAM's concurrent-insert conditions are met. Concurrent inserts into MyISAM tables are discussed further below.
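The effect of the "local" option can be sketched with two sessions (this reuses the film_text table from the earlier examples; the insert succeeding concurrently assumes the table has no holes, per the concurrent-insert rules described later):

```sql
-- Session 1: read lock with the LOCAL option
LOCK TABLES film_text READ LOCAL;
SELECT COUNT(*) FROM film_text;   -- reads are allowed in this session

-- Session 2, concurrently:
INSERT INTO film_text (film_id, title) VALUES (1004, 'Test2');
-- succeeds: READ LOCAL permits concurrent inserts at the end of the table
UPDATE film_text SET title = 'X' WHERE film_id = 1001;
-- blocks until Session 1 releases the lock

-- Session 1:
UNLOCK TABLES;
```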

When using LOCK TABLES to lock tables explicitly, all locks the statements will need must be obtained at the same time, and MySQL does not support lock escalation. That is, after executing LOCK TABLES you can access only the explicitly locked tables, not any unlocked tables; likewise, if you acquire a read lock you can only perform query operations, not updates. This is basically also the case with automatic locking: MyISAM always obtains all locks required by a SQL statement at once. This is why MyISAM tables are deadlock free (Deadlock Free).

If a session uses the LOCK TABLES command to add a read lock to the table film_text, that session can query records in the locked table, but updating it or accessing other tables raises an error; meanwhile, another session can query records in the table, but its updates will wait for the lock.
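This behavior can be illustrated with a short session (the last SELECT assumes some other table, here called film, that was not named in the LOCK TABLES statement):

```sql
mysql> lock table film_text read;
Query OK, 0 rows affected (0.00 sec)

mysql> select film_id, title from film_text where film_id = 1001;  -- OK

mysql> update film_text set title = 'New' where film_id = 1001;
ERROR 1099 (HY000): Table 'film_text' was locked with a READ lock and can't be updated

mysql> select * from film;  -- a table not covered by LOCK TABLES
ERROR 1100 (HY000): Table 'film' was not locked with LOCK TABLES

mysql> unlock tables;
```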

When using LOCK TABLES, not only must all tables used be locked at once, but also, for every alias under which the same table appears in the SQL statement, it must be locked under that same alias; otherwise an error occurs!

Examples are as follows.

(1) Obtain a read lock on the actor table:

mysql> lock table actor read;

Query OK, 0 rows affected (0.00 sec)

(2) However, accessing through aliases will prompt an error:

mysql> select a.first_name,a.last_name,b.first_name,b.last_name from actor a,actor b where a.first_name = b.first_name and a.first_name = 'Lisa' and a.last_name = 'Tom' and a.last_name <> b.last_name;

ERROR 1100 (HY000): Table 'a' was not locked with LOCK TABLES

(3) Aliases need to be locked separately:

mysql> lock table actor as a read,actor as b read;

Query OK, 0 rows affected (0.00 sec )

(4) The query according to the alias can be executed correctly:

mysql> select a.first_name,a.last_name,b.first_name,b.last_name from actor a,actor b where a.first_name = b.first_name and a.first_name = 'Lisa' and a.last_name = 'Tom' and a.last_name <> b.last_name;

+------------+-----------+------------+-----------+
| first_name | last_name | first_name | last_name |
+------------+-----------+------------+-----------+
| Lisa       | Tom       | LISA       | MONROE    |
+------------+-----------+------------+-----------+
1 row in set (0.00 sec)

Concurrent Inserts

As mentioned above, reads and writes of MyISAM tables are serial, but that is true only at the level of the table as a whole. Under certain conditions, MyISAM tables also support concurrent query and insert operations.

The MyISAM storage engine has a system variable, concurrent_insert, specifically used to control its concurrent-insert behavior; its value can be 0, 1, or 2.

When concurrent_insert is set to 0, concurrent insertion is not allowed.

When concurrent_insert is set to 1, if there are no holes in the MyISAM table (that is, there are no deleted rows in the middle of the table), MyISAM allows one process to read the table while another process inserts records from the end of the table. This is also the default setting for MySQL.

When concurrent_insert is set to 2, records are allowed to be inserted concurrently at the end of the table regardless of whether there are holes in the MyISAM table.

You can use the concurrent insertion feature of the MyISAM storage engine to solve lock contention for querying and inserting the same table in the application. For example, setting the concurrent_insert system variable to 2 always allows concurrent insertion; at the same time, the OPTIMIZE TABLE statement is regularly executed during the system's idle period to defragment the space and recover the intermediate holes caused by deleting records. For a detailed introduction to the OPTIMIZE TABLE statement, please refer to the section "Two Simple and Practical Optimization Methods" in Chapter 18.
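The tuning described above can be sketched as follows (variable and statement names as in MySQL of this era; run OPTIMIZE TABLE during idle periods):

```sql
-- Always allow concurrent inserts at the end of MyISAM tables
SET GLOBAL concurrent_insert = 2;

-- Periodically defragment to reclaim the holes left by deleted records
OPTIMIZE TABLE film_text;
```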

MyISAM's lock scheduling

As mentioned before, the read and write locks of the MyISAM storage engine are mutually exclusive, and read and write operations are serial. So if one process requests a read lock on a MyISAM table while another process simultaneously requests a write lock on the same table, how does MySQL handle it? The answer: the writing process acquires the lock first. Moreover, even if the read request arrives in the lock wait queue first and the write request arrives later, the write lock is still inserted ahead of the read request! This is because MySQL considers write requests generally more important than read requests. It is also why MyISAM tables are unsuitable for applications with both a large number of update operations and query operations: a large volume of updates can make it difficult for queries to ever obtain a read lock, potentially blocking them forever. Fortunately, we can adjust MyISAM's scheduling behavior through some settings.

By specifying the startup parameter low-priority-updates, the MyISAM engine can be made to give priority to read requests.

By executing the command SET LOW_PRIORITY_UPDATES=1, the priority of update requests issued by this connection is reduced.

Reduce the priority of the statement by specifying the LOW_PRIORITY attribute of the INSERT, UPDATE, and DELETE statement.
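The second and third adjustments look like this in practice (table and column names reuse the earlier film_text example; the first method, low-priority-updates, is a server startup parameter, e.g. mysqld --low-priority-updates):

```sql
-- Lower the priority of all update requests issued by this connection
SET LOW_PRIORITY_UPDATES = 1;

-- Or lower the priority of a single statement
UPDATE LOW_PRIORITY film_text SET title = 'T' WHERE film_id = 1001;
INSERT LOW_PRIORITY INTO film_text (film_id, title) VALUES (1005, 'T5');
```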

Although the three methods above are all either writes-first or reads-first, they can still be used to solve serious read-lock wait problems in applications where queries are relatively important (such as a user login system).

In addition, MySQL provides a compromise for adjusting read-write conflicts: set an appropriate value for the system parameter max_write_lock_count. When the read locks on a table reach this count, MySQL temporarily lowers the priority of write requests, giving reading processes a chance to obtain the lock.
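For example (the threshold value 16 here is an arbitrary illustration, not a recommendation):

```sql
-- After 16 write locks have been granted on a table, let pending
-- read requests jump ahead of further write requests
SET GLOBAL max_write_lock_count = 16;
```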

The problems caused by the write-priority scheduling mechanism, and their solutions, have been discussed above. One more point deserves emphasis: query operations that run for a long time can also "starve" the writing process! Therefore, applications should try to avoid long-running queries. Don't always try to solve a problem with a single SELECT statement: such seemingly clever SQL statements are often complex and take a long time to execute. Where possible, "decompose" SQL statements with measures such as intermediate tables, so that each step of the query completes in a shorter time and lock conflicts are reduced. If complex queries are unavoidable, schedule them during idle periods of the database; for example, some periodic statistics can be scheduled to run at night.

InnoDB lock

The two biggest differences between InnoDB and MyISAM are: first, InnoDB supports transactions (TRANSACTION); second, it uses row-level locks. Row-level locks differ from table-level locks in many ways, and the introduction of transactions also brings some new problems. Let's first introduce a little background knowledge, then discuss InnoDB's locking in detail.

1. Transaction (Transaction) and its ACID attributes

A transaction is a logical processing unit composed of a set of SQL statements. A transaction has the following four attributes, which are usually referred to as the ACID attributes of the transaction.

Atomicity: a transaction is an atomic unit of work; either all of its modifications to the data are performed, or none are.

Consistency: data must remain consistent when a transaction begins and when it completes. This means all relevant data rules must be applied to the transaction's modifications to preserve data integrity; at the end of the transaction, all internal data structures (such as B-tree indexes or doubly linked lists) must also be correct.

Isolation: the database system provides an isolation mechanism guaranteeing that transactions execute in an "independent" environment unaffected by concurrent operations outside them. This means the intermediate states of a transaction are invisible to the outside, and vice versa.

Durability: after a transaction completes, its modifications to the data are permanent and are preserved even in the event of a system failure.

Bank transfer is a typical example of a transaction.

2. Problems caused by concurrent transaction processing

Compared with serial processing, concurrent transaction processing can greatly increase the utilization of database resources and improve the transaction throughput of the database system, thereby supporting more users. However, concurrent transaction processing will also bring some problems, mainly including the following situations.

Lost Updates: when two or more transactions select the same row and then update it based on the value originally selected, each transaction is unaware of the others, so a lost-update problem occurs: the last update overwrites updates made by the other transactions. For example, two editors make electronic copies of the same document; each independently changes their copy and then saves it, overwriting the original document. The editor who saves last overwrites the other editor's changes. This problem can be avoided if one editor cannot access the file until the other editor has finished and committed the transaction.

Dirty Reads: a transaction is modifying a record, and before the transaction completes and commits, the record's data is in an inconsistent state; at this moment, another transaction reads the same record. Without control, the second transaction reads this "dirty" data and may do further processing based on it, producing an uncommitted-data dependency. This phenomenon is vividly called a "dirty read".

Non-Repeatable Reads: a transaction reads some data, and at some later point reads the same data again, only to find that it has changed or that some records have been deleted! This phenomenon is called a "non-repeatable read".

Phantom Reads: a transaction re-reads previously retrieved data using the same query conditions and finds that other transactions have inserted new rows satisfying those conditions. This phenomenon is called a "phantom read".
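The non-repeatable read case, for instance, can be reproduced with two sessions (a sketch reusing the film_text table from earlier; it assumes an isolation level, such as READ COMMITTED, that permits the anomaly):

```sql
-- Session A
BEGIN;
SELECT title FROM film_text WHERE film_id = 1001;   -- e.g. returns 'Test'

-- Session B, concurrently
UPDATE film_text SET title = 'Changed' WHERE film_id = 1001;
COMMIT;

-- Session A, same transaction
SELECT title FROM film_text WHERE film_id = 1001;
-- now returns 'Changed': a non-repeatable read
COMMIT;
```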

3. Transaction isolation level

Among the problems caused by concurrent transaction processing above, "lost update" should usually be avoided entirely. However, preventing lost updates is not something the database's transaction controller can solve alone; the application must add the necessary locks to the data being updated. So preventing lost updates is the application's responsibility.

"Dirty read", "non-repeatable read" and "phantom read" are actually database read consistency problems, which must be solved by the database providing a certain transaction isolation mechanism. The ways in which databases implement transaction isolation can basically be divided into the following two types.

One is to lock the data before reading it to prevent other transactions from modifying the data.

The other is to generate, without any locks, a consistent snapshot (Snapshot) of the data as of the time of the request, and use this snapshot to provide consistent reads at a certain level (statement level or transaction level). From the user's perspective, it seems the database can provide multiple versions of the same data; this technique is therefore called multi-version concurrency control (MVCC or MCC), and such databases are often called multi-version databases.

A consistent read, also known as a snapshot read, uses the MVCC mechanism to read committed data from undo, so the read is non-blocking.

A consistent read returns data that had been committed as of some point in time, with one special case: data modified within the current transaction, even if uncommitted, can be read later within that same transaction. A consistent read is an ordinary SELECT statement without clauses such as FOR UPDATE or LOCK IN SHARE MODE; it uses the committed data in undo and needs no locks (apart from MDL). A current read is the read performed by statements such as UPDATE, DELETE, SELECT ... FOR UPDATE, and SELECT ... LOCK IN SHARE MODE: it reads the latest data in the database and locks the rows read, and the gaps as well under RR isolation. If a lock cannot be obtained, the statement waits until the lock is acquired or the wait times out.
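The distinction can be summarized with three variants of the same query (film_text borrowed from the earlier examples):

```sql
SELECT * FROM film_text WHERE film_id = 1001;
-- consistent (snapshot) read: committed data from undo, no row locks

SELECT * FROM film_text WHERE film_id = 1001 LOCK IN SHARE MODE;
-- current read: latest data, shared (S) lock on the rows read

SELECT * FROM film_text WHERE film_id = 1001 FOR UPDATE;
-- current read: latest data, exclusive (X) lock on the rows read
```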

The stricter a database's transaction isolation, the smaller the concurrency side effects, but the higher the price paid, because transaction isolation essentially "serializes" transactions to some degree, which obviously contradicts "concurrency". At the same time, different applications have different requirements for read consistency and transaction isolation; for example, many applications are not sensitive to "non-repeatable reads" or "phantom reads" and care more about the ability to access data concurrently.

To resolve the contradiction between "isolation" and "concurrency", ISO/ANSI SQL92 defines four transaction isolation levels. Each level provides a different degree of isolation and allows different side effects, and applications can balance "isolation" against "concurrency" by choosing the level that suits their business logic. Table 20-5 summarizes the characteristics of these four isolation levels well.

[Table 20-5: characteristics of the four transaction isolation levels (image not reproduced)]

Not every database fully implements all four isolation levels. Oracle provides only the two standard levels Read Committed and Serializable, plus its own Read Only level; SQL Server supports the four ISO/ANSI SQL92 levels and additionally an isolation level called "snapshot", though strictly speaking the latter is a Serializable level implemented with MVCC.

MySQL supports all four isolation levels, but with some peculiarities in the implementation: at some isolation levels MVCC consistent reads are used, while in other cases they are not. This is introduced further in later chapters.
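The level can be inspected and changed per session; for instance (tx_isolation is the variable name used by MySQL versions of this era):

```sql
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT @@tx_isolation;   -- e.g. READ-COMMITTED
```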

4. Obtain the InnoDB row lock contention situation

You can analyze the row lock contention situation on the system by checking the InnoDB_row_lock status variable.

mysql> show status like 'innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Innodb_row_lock_current_waits | 0     |
| Innodb_row_lock_time          | 0     |
| Innodb_row_lock_time_avg      | 0     |
| Innodb_row_lock_time_max      | 0     |
| Innodb_row_lock_waits         | 0     |
+-------------------------------+-------+
5 rows in set (0.01 sec)

If you find that lock contention is serious, for example the values of Innodb_row_lock_waits and Innodb_row_lock_time_avg are relatively high, you can also set up InnoDB Monitors to further observe the tables and data rows involved in the lock conflicts, and analyze the cause of the contention.

The specific method is as follows:

mysql> CREATE TABLE innodb_monitor(a INT) ENGINE=INNODB;

Query OK, 0 rows affected (0.14 sec)

Then you can use the following statement to view:

mysql> show innodb status\G

The monitor can be stopped by issuing the following statement:

mysql> DROP TABLE innodb_monitor;

Query OK, 0 rows affected (0.05 sec)

After the monitor is set up, the output of SHOW INNODB STATUS contains detailed information about current lock waits, including the table name, lock type, and the state of the locked records, which helps further analysis and diagnosis. While the monitor is open, its output is written to the log every 15 seconds by default; if it is left open for a long time, the .err file becomes very large. So once the cause of the problem is confirmed, remember to drop the monitor table to close the monitor, or start the server with the "--console" option to avoid writing the log file.

5. InnoDB’s row lock mode and locking method

InnoDB implements the following two types of row locks.

Shared lock (S): allows the transaction holding it to read a row, and prevents other transactions from obtaining an exclusive lock on the same data set.

Exclusive lock (X): allows the transaction holding it to update the data, and prevents other transactions from obtaining shared read locks or exclusive write locks on the same data set.

In addition, to allow row locks and table locks to coexist and implement a multi-granularity locking mechanism, InnoDB has two internally used intention locks (Intention Locks); both are table-level locks.

Intention shared lock (IS): The transaction intends to add a row shared lock to the data row. The transaction must first obtain the IS lock of the table before adding a shared lock to a data row.

Intention exclusive lock (IX): The transaction intends to add a row exclusive lock to the data row. The transaction must first obtain the IX lock of the table before adding an exclusive lock to a data row.

[Table: compatibility of InnoDB row-lock and intention-lock modes (image not reproduced)]

If the lock mode requested by a transaction is compatible with the current lock, InnoDB grants the requested lock to the transaction; otherwise, if the two are incompatible, the transaction waits for the lock to be released.

Intention locks are automatically added by InnoDB and do not require user intervention.

The summary is as follows:

1. For UPDATE, DELETE and INSERT statements, InnoDB will automatically add exclusive locks (X) to the involved data sets;

2. For ordinary SELECT statement, InnoDB will not add any locks;

3. Transactions can add shared locks or exclusive locks to the recordset through the following statements.

Shared lock(S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE.

Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE.

Use SELECT ... LOCK IN SHARE MODE to obtain a shared lock, mainly to confirm whether a given row exists when there is a data dependency, and to ensure that no one else performs an UPDATE or DELETE on that row.

However, if the current transaction also needs to update the record, this can easily lead to deadlock. For applications that need to update a row after locking it, the SELECT ... FOR UPDATE form should be used to obtain an exclusive lock.
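A sketch of how the LOCK IN SHARE MODE pattern deadlocks when both sessions go on to update (row 1001 of film_text is an assumed example):

```sql
-- Session 1:
BEGIN;
SELECT * FROM film_text WHERE film_id = 1001 LOCK IN SHARE MODE;  -- S lock

-- Session 2:
BEGIN;
SELECT * FROM film_text WHERE film_id = 1001 LOCK IN SHARE MODE;  -- S lock too

-- Session 1:
UPDATE film_text SET title = 'A' WHERE film_id = 1001;
-- waits: Session 2's S lock blocks the X lock

-- Session 2:
UPDATE film_text SET title = 'B' WHERE film_id = 1001;
-- ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
```

With SELECT ... FOR UPDATE in place of LOCK IN SHARE MODE, the second session would simply wait at its SELECT instead of deadlocking.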

6. InnoDB row lock implementation method

InnoDB row locks are implemented by locking index entries on indexes. This is where MySQL differs from Oracle: the latter locks the corresponding data rows in the data blocks.

InnoDB’s row lock implementation feature means that InnoDB will use row-level locks only when data is retrieved through index conditions. Otherwise, InnoDB will use table locks!

In practical applications, special attention should be paid to this feature of InnoDB row locks, otherwise it may cause a large number of lock conflicts, thus affecting concurrency performance.

(1) When querying without index conditions, InnoDB does use table locks instead of row locks.

(2) Since MySQL's row locks are locks on index entries, not on records, lock conflicts will occur if the same index key is used, even though different rows are being accessed. Keep this in mind when designing applications.

(3) When a table has multiple indexes, different transactions can use different indexes to lock different rows. In addition, InnoDB uses row locks to lock the data whether the condition uses a primary key index, a unique index, or an ordinary index.

(4) Even if an indexed field appears in the condition, whether the index is used to retrieve data is decided by MySQL by weighing the cost of different execution plans. If MySQL judges a full table scan to be more efficient, for example on some very small tables, it will not use the index, in which case InnoDB uses table locks rather than row locks. So when analyzing lock conflicts, don't forget to check the SQL execution plan to confirm the index is actually used. For a detailed discussion of when MySQL does not use an index, see the "Index Issues" section of this chapter.
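Points (1) and (2) can be made concrete with a hypothetical unindexed table, tab_no_index:

```sql
CREATE TABLE tab_no_index (id INT, name VARCHAR(10)) ENGINE = INNODB;

-- Session 1:
BEGIN;
SELECT * FROM tab_no_index WHERE id = 1 FOR UPDATE;
-- no usable index: InnoDB ends up locking every row

-- Session 2:
SELECT * FROM tab_no_index WHERE id = 2 FOR UPDATE;
-- blocks, even though it asks for a different row

-- Adding an index lets the two sessions lock different rows without conflict:
ALTER TABLE tab_no_index ADD INDEX idx_id (id);
```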

7. Gap lock (Next-Key lock)

When we retrieve data with range conditions rather than equality conditions and request shared or exclusive locks, InnoDB locks the index entries of existing records that satisfy the condition; for key values that fall within the condition range but do not exist, called "gaps" (GAP), InnoDB also locks these gaps. This locking mechanism is the so-called gap lock (Next-Key lock). For example, if the emp table has only 101 records, with empid values 1, 2, ..., 100, 101, then the following SQL:

Select * from emp where empid > 100 for update;

is a range-condition retrieval. InnoDB locks not only the record with empid 101 that satisfies the condition, but also the "gap" for empid values greater than 101 (where no records exist). InnoDB uses gap locks, on the one hand, to prevent phantom reads and meet the requirements of the relevant isolation levels: in the example above, without the gap lock, if another transaction inserted any record with empid greater than 100, a phantom read would occur when this transaction executed the statement again. On the other hand, gap locks satisfy the needs of recovery and replication. The impact of recovery and replication on the lock mechanism, and InnoDB's use of gap locks at different isolation levels, are introduced further in later chapters.

Obviously, when using range conditions to retrieve and lock records, InnoDB's locking mechanism will block concurrent insertion of key values ​​within the qualified range, which often results in serious lock waits. Therefore, in actual application development, especially applications with a lot of concurrent insertions, we should try our best to optimize the business logic, try to use equal conditions to access update data, and avoid using range conditions.

Special note is that, in addition to using gap locks when locking through range conditions, InnoDB will also use gap locks if equal conditions are used to request a lock for a non-existent record!
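Both cases can be shown against the emp example above:

```sql
-- Range condition: locks empid = 101 plus the gap above it
SELECT * FROM emp WHERE empid > 100 FOR UPDATE;

-- Equality on a non-existent key value: also takes a gap lock
SELECT * FROM emp WHERE empid = 102 FOR UPDATE;

-- Either way, a concurrent insert into the locked gap waits:
-- INSERT INTO emp (empid) VALUES (103);  -- blocks until the lock is released
```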

The need for recovery and replication, the impact on the InnoDB lock mechanism

MySQL records successfully executed INSERT, UPDATE, DELETE, and other data-modifying SQL statements in the BINLOG, and uses it to implement MySQL database recovery and master-slave replication. MySQL's recovery mechanism (replication is, in fact, continuous BINLOG-based recovery on the slave) has the following characteristics.

First, MySQL recovery is at the SQL statement level, that is, re-executing the SQL statement in BINLOG. This is different from the Oracle database, which is based on database file blocks.

Second, MySQL's BINLOG is recorded in transaction-commit order, and recovery replays it in the same order. This too differs from Oracle, which recovers data according to the System Change Number (SCN): when each transaction starts, Oracle assigns it a globally unique SCN, whose order is consistent with the time order in which the transactions started.

From these two points, it follows that MySQL's recovery mechanism requires that, before a transaction commits, no other concurrent transaction may insert any record satisfying its lock conditions; that is, phantom reads are not allowed. This exceeds the ISO/ANSI SQL92 "repeatable read" isolation requirement and in effect demands that transactions be serialized.

In addition, for SQL statements such as "insert into target_tab select * from source_tab where ..." and "create table new_tab ... select ... from source_tab where ..." (CTAS), the user performs no update on source_tab, yet MySQL handles this kind of SQL statement specially.


In the example above, the statement simply reads data from the source_tab table, which is equivalent to executing an ordinary SELECT and could use a consistent read. Oracle does exactly this: using the multi-version data provided by MVCC, it achieves a consistent read without placing any lock on source_tab. InnoDB also implements multi-version data and requires no locks for ordinary consistent-read SELECTs; yet here InnoDB adds a shared lock to source_tab and does not use the multi-version consistent-read technique!

Why does MySQL do this? To guarantee the correctness of recovery and replication. Without the lock, if another transaction updated source_tab while the statement above was executing, recovery could produce incorrect results. To demonstrate this, let us repeat the previous example, with one difference: before session_1 executes its transaction, the system variable innodb_locks_unsafe_for_binlog is set to "on" (its default value is "off").

(Figures: session example with innodb_locks_unsafe_for_binlog set to "on"; source_tab is no longer locked.)

As the example shows, once the system variable innodb_locks_unsafe_for_binlog is set to "on", InnoDB no longer locks source_tab, and the result matches the application logic. However, if we examine the contents of the BINLOG:

SET TIMESTAMP=1169175130 ;

BEGIN;

# at 274

#070119 10:51:57 server id 1 end_log_pos 105 Query thread_id=1 exec_time=0 error_code=0

SET TIMESTAMP=1169175117;

update source_tab set name = '8' where name = '1';

# at 379

#070119 10:52:10 server id 1 end_log_pos 406 error_code=0

SET TIMESTAMP=1169175134;

BEGIN;

# at 474

#070119 10:51:29 server id 1 end_log_pos 119 Query thread_id=2 exec_time=0 error_code=0

SET TIMESTAMP=1169175089;

insert into target_tab select d1,name from source_tab where name = '1';

# at 593

#070119 10:52:14 server id 1 end_log_pos 620 Xid = 7

COMMIT;

We find that in the BINLOG, the update operation is recorded before the INSERT ... SELECT. If this BINLOG is used for database recovery, the recovered data will not match the actual application logic; if it is used for replication, the master and slave databases will become inconsistent!

Therefore, INSERT ... SELECT ... and CREATE TABLE ... SELECT ... statements may block concurrent updates to the source table and make them wait on its locks. If the query is complex, this can cause serious performance problems, so we should avoid these statements in our applications whenever possible. In fact, MySQL classifies this kind of SQL as non-deterministic and does not recommend it.

If the application must use this kind of SQL to implement its business logic, and you do not want to block concurrent updates to the source table, you can take either of the following two measures:

First, as in the example above, set the value of innodb_locks_unsafe_for_binlog to "on", forcing MySQL to use multi-version consistent reads. The price, however, is that the BINLOG may no longer allow correct recovery or replication, so this approach is not recommended.

Second, achieve the same result indirectly by combining "select * from source_tab ... into outfile" and "load data infile ..." statements. Done this way, MySQL does not place locks on source_tab.
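A hedged sketch of this workaround (the file path is illustrative, and the MySQL server account needs the FILE privilege plus write access to the path):

```sql
-- Step 1: export with an ordinary consistent read; no shared locks
-- are held on source_tab rows for the benefit of the BINLOG
SELECT * FROM source_tab WHERE name = '1'
  INTO OUTFILE '/tmp/source_rows.txt';

-- Step 2: load the exported rows into the target table
LOAD DATA INFILE '/tmp/source_rows.txt' INTO TABLE target_tab;
```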

8. Differences in InnoDB’s consistent reading and locking under different isolation levels

As mentioned earlier, locks and multi-version data are the key to how InnoDB implements consistent reads and the ISO/ANSI SQL92 isolation levels. Therefore, the consistent-read strategy and the locks InnoDB needs when processing SQL differ between isolation levels. At the same time, the characteristics of the recovery and replication mechanisms also strongly influence the consistent-read and locking strategies of some SQL statements. These characteristics are summarized in Table 20-16 for the reader's convenience.

1: At the READ COMMITTED (RC) isolation level, gap locks are not used. The official documentation explains as follows:

Each consistent read, even within the same transaction, sets and reads its own fresh snapshot. For information about consistent reads, see Section 14.8.2.3, “Consistent Nonlocking Reads”.

For locking reads (SELECT with FOR UPDATE or LOCK IN SHARE MODE), UPDATE statements, and DELETE statements, InnoDB locks only index records, not the gaps before them, and thus permits the free insertion of new records next to locked records. Gap locking is only used for foreign-key constraint checking and duplicate-key checking.

The official documentation is available at: https://dev.mysql.com/doc/refman/5.5/en/innodb-transaction-isolation-levels.html
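A small two-session sketch of the READ COMMITTED behavior described above (hypothetical table `t` with primary key `id`):

```sql
-- session_1 (READ COMMITTED): locks only the index record id = 10
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
SELECT * FROM t WHERE id = 10 FOR UPDATE;

-- session_2: inserting a new record "next to" the locked one succeeds
-- immediately, because no gap lock is held under RC
INSERT INTO t (id) VALUES (11);
```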

2: Under the REPEATABLE READ isolation level, if the index is unique and the search condition is also unique, no gap lock is needed; otherwise gap locks are used. The official description is as follows:

REPEATABLE READ

This is the default isolation level for InnoDB. Consistent reads within the same transaction read the snapshot established by the first read. This means that if you issue several plain (nonlocking) SELECT statements within the same transaction, these SELECT statements are consistent also with respect to each other. See Section 14.8.2.3, "Consistent Nonlocking Reads".

For locking reads (SELECT with FOR UPDATE or LOCK IN SHARE MODE), UPDATE, and DELETE statements, locking depends on whether the statement uses a unique index with a unique search condition, or a range-type search condition.

For a unique index with a unique search condition, InnoDB locks only the index record found, not the gap before it.

For other search conditions, InnoDB locks the index range scanned, using gap locks or next-key locks to block insertions by other sessions into the gaps covered by the range. For information about gap locks and next-key locks, see Section 14.8.1, “InnoDB Locking”.

The official documentation is available at: https://dev.mysql.com/doc/refman/5.5/en/innodb-transaction-isolation-levels.html
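The contrast under REPEATABLE READ can be sketched as follows (hypothetical table `t` with primary key `id`):

```sql
-- Unique index + unique search condition: only the record id = 10
-- is locked; the gap before it stays free for inserts
SELECT * FROM t WHERE id = 10 FOR UPDATE;

-- Range condition: InnoDB takes next-key locks on the scanned range,
-- so another session inserting, say, id = 15 will block
SELECT * FROM t WHERE id BETWEEN 10 AND 20 FOR UPDATE;
```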

9. When to use table locks?

For InnoDB tables, row-level locks should be used in most cases, since transactions and row locks are usually the very reasons we choose InnoDB in the first place. In a few special cases, however, table-level locks can be considered.

The first situation: the transaction needs to update most or all of the data in a relatively large table. With the default row locks, not only is the transaction executed inefficiently, but other transactions may be left waiting on locks for a long time and lock conflicts may arise; in this case, table locks can be considered to speed up the transaction.

The second situation: the transaction involves multiple tables and is relatively complex, making deadlocks likely and potentially causing a large number of transaction rollbacks. Here, too, you can consider locking all the tables involved in the transaction at once, avoiding deadlocks and reducing the database overhead of rolling transactions back.

Of course, such transactions should be rare in an application; otherwise, you should consider using MyISAM tables instead.

When using table locks under InnoDB, pay attention to the following two points.

(1) Although LOCK TABLES can place table-level locks on InnoDB tables, note that these table locks are managed not by the InnoDB storage engine layer but by the layer above it, MySQL Server. Only when autocommit=0 and innodb_table_locks=1 (the default) does the InnoDB layer know about the table locks added by MySQL Server, and only then can MySQL Server perceive the row locks added by InnoDB. In that case, InnoDB can automatically detect deadlocks involving such table-level locks; otherwise, InnoDB cannot automatically detect and handle them. Deadlocks are discussed further in the next section.

(2) When using LOCK TABLES to lock InnoDB tables, be careful to set AUTOCOMMIT to 0 first; otherwise MySQL will not actually lock the tables. Do not call UNLOCK TABLES to release the table locks before the transaction ends, because UNLOCK TABLES implicitly commits the transaction. Conversely, COMMIT and ROLLBACK cannot release table-level locks added with LOCK TABLES; the locks must be released with UNLOCK TABLES. The correct pattern is shown in the following statements.

For example, if you need to write to table t1 and read from table t2, you can do the following:

SET AUTOCOMMIT=0;

LOCK TABLES t1 WRITE, t2 READ, ...;

[do something with tables t1 and t2 here];

COMMIT;

UNLOCK TABLES;

10. About deadlock

As mentioned above, MyISAM table locks are deadlock-free, because MyISAM always acquires all the locks it needs at once: either all of them are granted, or it waits. Hence no deadlock can occur. In InnoDB, however, except for transactions consisting of a single SQL statement, locks are acquired gradually, which makes deadlocks possible. Table 20-17 shows an example of a deadlock.

(Figure, Table 20-17: deadlock example in which two sessions each wait for the exclusive lock held by the other.)

In the example above, each transaction needs the exclusive lock held by the other in order to proceed. This kind of circular lock wait is a typical deadlock.

After a deadlock occurs, InnoDB can generally detect it automatically, making one transaction release its locks and roll back while the other obtains the locks and completes. However, when external locks or table locks are involved, InnoDB cannot fully detect deadlocks on its own; this must be handled by setting the lock-wait timeout parameter innodb_lock_wait_timeout. Note that this parameter is not only a deadlock workaround: under high concurrency, if a large number of transactions are suspended because the locks they need cannot be granted immediately, they consume substantial resources, cause serious performance problems, and can even drag down the whole database. Setting a suitable lock-wait timeout threshold helps avoid this situation.
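The timeout is expressed in seconds; a sketch of inspecting and adjusting it (the value 10 below is only an example):

```sql
-- Check the current lock-wait timeout (default is 50 seconds)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- Lower it for the current session so blocked statements fail fast
-- with "Lock wait timeout exceeded" instead of piling up
SET SESSION innodb_lock_wait_timeout = 10;
```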

Generally speaking, deadlocks are an application design problem. By adjusting business processes, database object design, transaction sizes, and the SQL statements that access the database, the vast majority of deadlocks can be avoided.

The following introduces, by example, several common methods of avoiding deadlock.

(1) If different programs in an application access multiple tables concurrently, agreeing to access the tables in the same order greatly reduces the chance of deadlock. In the following example, because the two sessions access the two tables in different orders, the chance of deadlock is very high; if they accessed the tables in the same order, deadlock could be avoided.

(Figure: two sessions accessing two tables in opposite orders, making deadlock likely.)
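The agreed-order rule can be sketched like this (table and column names are illustrative):

```sql
-- Bad: session_1 locks table_1 then table_2 while session_2 locks
-- table_2 then table_1, so a circular wait (deadlock) is likely.
-- Good: both sessions follow the same order, table_1 before table_2:
BEGIN;
UPDATE table_1 SET col = 1 WHERE id = 1;  -- lock the table_1 row first
UPDATE table_2 SET col = 1 WHERE id = 1;  -- then the table_2 row
COMMIT;
```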

(2) When a program processes data in batches, sorting the data in advance so that each thread processes records in a fixed order also greatly reduces the possibility of deadlock.

(Figure: batch-processing example in which records are processed in a fixed, sorted order.)
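For example, if every batch worker locks its rows in ascending primary-key order, lock waits form a line rather than a cycle (a sketch; the table and key names are illustrative):

```sql
-- Every batch worker locks its target rows in the same (ascending id)
-- order, so no two workers can wait on each other in a cycle
BEGIN;
SELECT * FROM orders
  WHERE status = 'pending'
  ORDER BY id          -- fixed, agreed-upon order
  FOR UPDATE;
-- ... process and update the locked rows ...
COMMIT;
```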

(3) In a transaction, if you intend to update a record, apply directly for a lock of sufficient strength, i.e. an exclusive lock, rather than first acquiring a shared lock and upgrading to an exclusive lock at update time. By the time the user requests the exclusive lock, other transactions may already hold a shared lock on the same record, causing lock conflicts or even deadlock. For a concrete demonstration, see the example in Section 20.3.3.
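In other words, request the exclusive lock up front (a sketch with a hypothetical table `t`):

```sql
-- Risky: read with a shared lock, then upgrade to exclusive;
-- two sessions doing this on the same row can deadlock
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;
UPDATE t SET col = 2 WHERE id = 1;

-- Safer: take the exclusive lock immediately
SELECT * FROM t WHERE id = 1 FOR UPDATE;
UPDATE t SET col = 2 WHERE id = 1;
```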

(4) As mentioned before, under the REPEATABLE-READ isolation level, if two threads simultaneously execute SELECT ... FOR UPDATE with the same condition to place exclusive locks, and no record satisfies the condition, both threads succeed in locking. Each program then discovers that the record does not yet exist and tries to insert a new one; if both threads do this, a deadlock occurs. Changing the isolation level to READ COMMITTED avoids the problem in this case.

(Figure: deadlock caused by two sessions issuing SELECT ... FOR UPDATE on a nonexistent record under REPEATABLE-READ.)
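The problematic pattern can be sketched as follows (hypothetical table `t` with primary key `id`); under REPEATABLE-READ both sessions acquire the gap lock, and the two subsequent inserts deadlock:

```sql
-- Both sessions run this concurrently under REPEATABLE-READ:
BEGIN;
SELECT * FROM t WHERE id = 100 FOR UPDATE;  -- no row exists: each
                                            -- session locks the gap
INSERT INTO t (id) VALUES (100);            -- each insert waits on the
                                            -- other's gap lock: deadlock
COMMIT;
```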

(5) At the READ COMMITTED isolation level, if both threads first execute SELECT ... FOR UPDATE to check whether a matching record exists and, finding none, insert one, only one thread can insert successfully while the other waits on a lock. When the first thread commits, the second fails with a duplicate-key error; but despite the error it still holds an exclusive lock! If a third thread then requests an exclusive lock, a deadlock occurs.

In this case, you can perform the insert directly and catch the duplicate-key exception, or always execute ROLLBACK on a duplicate-key error to release the acquired exclusive lock.
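One way to sketch the insert-first approach is MySQL's INSERT ... ON DUPLICATE KEY UPDATE, which turns the duplicate-key error into an update (table and column names are illustrative):

```sql
-- Insert directly; if the key already exists, update the row instead
-- of raising a duplicate-key error while holding locks
INSERT INTO t (id, col) VALUES (100, 'x')
  ON DUPLICATE KEY UPDATE col = VALUES(col);
```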

Although the design and SQL-tuning measures introduced above can greatly reduce deadlocks, deadlocks are difficult to eliminate entirely. It is therefore good programming practice to always catch and handle deadlock exceptions.

If a deadlock occurs, you can use the SHOW ENGINE INNODB STATUS command to determine the cause of the most recent deadlock. The output includes detailed information about the transactions involved: the SQL statements that caused the deadlock, the locks each transaction had acquired and was waiting for, and which transaction was rolled back. From this you can analyze the cause of the deadlock and possible improvements.

The command is issued as follows:

mysql> show engine innodb status \G

Summary

This chapter has focused on the implementation characteristics of MyISAM table-level locks and InnoDB row-level locks in MySQL, and discussed the lock problems often encountered with the two storage engines and their solutions.

For MyISAM table locks, the following points were discussed:

(1) Shared read locks (S) are compatible with each other, but shared read locks (S) and exclusive write locks (X), as well as exclusive write locks with each other, are mutually exclusive; in other words, reads and writes are serialized.

(2) Under certain conditions, MyISAM allows queries and inserts to execute concurrently, which we can exploit to resolve lock contention between queries and inserts on the same table.

(3) MyISAM's default lock scheduling gives writes priority, which is not necessarily suitable for every application. Users can tune read/write lock contention by setting the LOW_PRIORITY_UPDATES parameter or by specifying the LOW_PRIORITY option in INSERT, UPDATE, and DELETE statements.

(4) Because table locks have coarse granularity and serialize reads and writes, a MyISAM table with many update operations may suffer severe lock waits; consider switching to InnoDB tables to reduce lock conflicts.

For InnoDB tables, this chapter mainly discusses the following contents.

InnoDB's row locks are based on locking index entries; if data is not accessed through an index, InnoDB uses table locks instead.

The chapter introduced InnoDB's gap lock (next-key) mechanism and the reasons InnoDB uses gap locks.

Under different isolation levels, InnoDB’s locking mechanism and consistent read strategy are different.

MySQL recovery and replication also have a great impact on the InnoDB lock mechanism and consistent read strategy.

Lock conflicts and even deadlocks are difficult to completely avoid.

After understanding the InnoDB lock characteristics, users can reduce lock conflicts and deadlocks through design and SQL adjustments, including:

Use a lower isolation level as much as possible;

Design indexes carefully and try to access data through indexes so that locking is more precise, reducing the chance of lock conflicts.

Choose reasonable transaction sizes; small transactions have a lower probability of lock conflicts.

When explicitly locking a record set, it is best to request locks of sufficient strength all at once. For example, if you are going to modify data, apply directly for an exclusive lock rather than first acquiring a shared lock and then requesting an exclusive lock when modifying, which can easily lead to deadlock.

When different programs access a group of tables, they should agree to access the tables in the same order; and for a given table, access its rows in a fixed order as far as possible. This greatly reduces the chance of deadlock.

Use equality conditions to access data whenever possible, to avoid the impact of gap locks on concurrent inserts.

Do not request lock strength beyond what is actually needed; unless necessary, do not lock explicitly when querying.

For certain specific transactions, table locks can be used to speed up processing or reduce the possibility of deadlock.

The above is the detailed content of A detailed explanation of MySql row-level locks and table-level locks. For more information, please follow other related articles on the PHP Chinese website!

Statement:
This article is reproduced at:segmentfault.com. If there is any infringement, please contact admin@php.cn delete