Author: peter
URL: http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/
Everyone using InnoDB tables has probably got used to the fact that InnoDB performs non-locking reads, meaning that unless you use modifiers such as LOCK IN SHARE MODE or FOR UPDATE, SELECT statements will not lock any rows while running.
This is generally correct; however, there is a notable exception - INSERT INTO table1 SELECT * FROM table2. This statement performs a locking read (shared locks) on table2. The same applies to similar statements with WHERE clauses and joins. What matters is that the table being read is InnoDB - even if the writes go to a MyISAM table.
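For illustration, all of the following forms take shared locks on the rows they read from the InnoDB source table (the table and column names here are just placeholders):

INSERT INTO archive_orders SELECT * FROM orders;
INSERT INTO archive_orders SELECT * FROM orders WHERE created < '2006-01-01';
INSERT INTO order_report
  SELECT o.id, o.total, c.name
  FROM orders o JOIN customers c ON c.id = o.customer_id;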
So why was this done, given that it is pretty bad for MySQL performance and concurrency? The reason is replication. In MySQL before 5.1, replication is statement based, which means statements replayed on the slave should have the same effect as on the master. If InnoDB did not lock rows in the source table, another transaction could modify a row and commit before the transaction running the INSERT ... SELECT. That transaction could then be applied on the slave before the INSERT ... SELECT statement and possibly produce different data than on the master. Locking the rows in the source table while reading them protects against this: if another transaction modified a row before INSERT ... SELECT had a chance to access it, the modification is applied in the same order on the slave; if a transaction tries to modify a row after it was accessed and therefore locked by INSERT ... SELECT, it has to wait until the statement completes, which ensures it is executed on the slave in the proper order. Sounds pretty complicated? Well, all you need to know is that it had to be done for replication to work right in MySQL before 5.1.
In MySQL 5.1 this, as well as a few other problems, should be solved by row-based replication. I have, however, yet to give it real stress tests to see how well it performs :)
One more thing to take into account - INSERT ... SELECT actually performs the read in locking mode and so partially bypasses versioning and retrieves the latest committed rows. So even if you're operating in REPEATABLE-READ mode, this operation will behave as if in READ-COMMITTED mode, potentially giving a different result compared to what a pure SELECT would give. By the way, this applies to SELECT ... LOCK IN SHARE MODE and SELECT ... FOR UPDATE as well.
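To make this concrete, here is a small two-session sketch (table names t and t_copy are hypothetical) of how INSERT ... SELECT inside a REPEATABLE-READ transaction can see a row that a plain SELECT in the same transaction does not:

-- Session 1, REPEATABLE-READ
START TRANSACTION;
SELECT COUNT(*) FROM t;              -- say this returns 10; the read view is now fixed

-- Session 2
INSERT INTO t VALUES (11, 'new row');
COMMIT;

-- Session 1 again
SELECT COUNT(*) FROM t;              -- still 10: consistent non-locking read from the snapshot
INSERT INTO t_copy SELECT * FROM t;  -- locking read: copies 11 rows, including the one
                                     -- committed after the snapshot was taken
COMMIT;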
One may ask: what if I'm not using replication and have my binary log disabled? If replication is not used, you can enable the innodb_locks_unsafe_for_binlog option, which relaxes the locks InnoDB sets on statement execution and generally gives better concurrency. However, as the name says, it makes locks unsafe for replication and point-in-time recovery, so use the innodb_locks_unsafe_for_binlog option with caution.
Note that disabling binary logs is not enough to trigger relaxed locks. You have to set innodb_locks_unsafe_for_binlog=1 as well. This is done so that enabling the binary log does not cause unexpected changes in locking behavior and performance problems. You can also use this option with replication sometimes, if you really know what you're doing. I would not recommend it unless it is really needed, as you might not know which other locks will be relaxed in future versions and how that would affect your replication.
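Keep in mind this is a server startup option, not something you can change at runtime. A minimal sketch, assuming you set innodb_locks_unsafe_for_binlog=1 in the [mysqld] section of my.cnf (or on the server command line) and restarted the server:

-- verify the setting took effect after the restart
SHOW VARIABLES LIKE 'innodb_locks_unsafe_for_binlog';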
So what are safe workarounds if you're using replication?
The most general one is to use:

SELECT * FROM tbl1 INTO OUTFILE '/tmp/tbl1.txt';
LOAD DATA INFILE '/tmp/tbl1.txt' INTO TABLE tbl2;
instead of:

INSERT INTO tbl2 SELECT * FROM tbl1;
SELECT ... INTO OUTFILE does not have to set extra locks.
If you use this approach, make sure to delete the file after it is loaded back (this has to be done outside of the MySQL server), as otherwise the script will fail the second time.
If you need the result to be even closer to that of INSERT ... SELECT, you may execute this transaction in the READ-COMMITTED isolation mode.
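A minimal sketch of the dump-and-reload workaround run this way (the file path and table names are placeholders):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT * FROM tbl1 INTO OUTFILE '/tmp/tbl1.txt';
LOAD DATA INFILE '/tmp/tbl1.txt' INTO TABLE tbl2;
-- remove /tmp/tbl1.txt outside of the server before the next run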
Other workarounds are less general purpose. For example, if you're doing batch processing over well-indexed data, you might chop the transaction up and process rows in small batches, so locks are not held long enough to cause problems.
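As a sketch, assuming tbl1 has an indexed integer primary key id (column name assumed), the copy can be split into short transactions over small ranges; the shared locks are still taken, but each set is released at COMMIT, so none is held for long:

START TRANSACTION;
INSERT INTO tbl2 SELECT * FROM tbl1 WHERE id >= 1     AND id < 10001;
COMMIT;

START TRANSACTION;
INSERT INTO tbl2 SELECT * FROM tbl1 WHERE id >= 10001 AND id < 20001;
COMMIT;
-- ... repeat, advancing the range, until all rows are copied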
To complete this article, I should show how a wait caused by this statement looks in SHOW INNODB STATUS:
TRANSACTION 0 42304626, ACTIVE 14 sec, process no 29895, OS thread id 2894768 updating or deleting
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 320, undo log entries 1
MySQL thread id 1794760, query id 6994946 localhost root Updating
update sample set j=0 where i=5
TRX HAS BEEN WAITING 14 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 33504 n bits 328 index `j` of table `test/sample` trx id 0 42304626 lock_mode X locks rec but not gap waiting
Record lock, heap no 180 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
0: len 30; hex 306338386465646233353863643936633930363962373361353736383261; asc 0c88dedb358cd96c9069b73a57682a;...(truncated); 1: len 4; hex 00000005; asc ;;

TRANSACTION 0 42304624, ACTIVE 37 sec, process no 29895, OS thread id 4058032 fetching rows, thread declared inside InnoDB 3
mysql tables in use 1, locked 1
2539 lock struct(s), heap size 224576
MySQL thread id 1794751, query id 6994931 localhost root Sending data
insert into test select * from sample
As you can see, the INSERT ... SELECT has a lot of lock structs, which means it has locked a lot of rows. "fetching rows" of course means it is still going. In this case the write is done to a MyISAM table, so we will not see any write activity for it.
The other transaction, which happens to be a simple primary key update, is waiting on the sample table for this record to be unlocked.
