
Let’s talk about MySQL’s infrastructure and logging system

青灯夜游
2022-07-06 19:33:25

This article introduces MySQL's infrastructure and logging system in depth. I hope it will be helpful to you!


1. MySQL infrastructure

MySQL can be divided into two parts: the Server layer and the storage engine layer.

The Server layer includes the connector, query cache, parser, optimizer, executor, and so on. It covers most of MySQL's core service functionality and all built-in functions (such as date, time, math, and encryption functions). All cross-storage-engine features, such as stored procedures, triggers, and views, are implemented in this layer.

The storage engine layer is responsible for data storage and retrieval. Its architecture is plug-in based and supports multiple storage engines such as InnoDB, MyISAM, and Memory. The most commonly used storage engine today is InnoDB, which has been the default storage engine since MySQL 5.5.5. You can specify a particular engine explicitly, for example with engine=memory when creating a table.
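A minimal sketch of how this looks in practice (the table name t_mem is just an illustration):

SHOW ENGINES;                                             -- list supported engines and the default
CREATE TABLE t_mem (id INT PRIMARY KEY) ENGINE = MEMORY;  -- explicitly choose the Memory engine
SHOW TABLE STATUS LIKE 't_mem';                           -- the Engine column confirms which engine is used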

Different storage engines share a Server layer

1. Connector

The connector is responsible for establishing a connection with the client, obtaining permissions, maintaining and managing the connection. The connection command is generally:

mysql -h$ip -P$port -u$user -p
The mysql in the connection command is the client tool used to establish a connection with the server. After the TCP handshake is completed, the connector begins to authenticate your identity:

  • If the username or password is incorrect, you will receive an "Access denied for user" error, and the client program then terminates.
  • If the username and password pass authentication, the connector looks up your permissions in the permissions table. All permission checks made within this connection then depend on the permissions read at this point.

This means that after a user successfully establishes a connection, modifying that user's permissions with an administrator account does not affect existing connections; only new connections created after the change use the new permission settings.

After the connection is established, if you perform no further actions, the connection enters an idle state, which you can see with the show processlist command.


A Command value of Sleep indicates that the connection is idle.

If the client is inactive for too long, the connector automatically disconnects it. This timeout is controlled by the parameter wait_timeout, whose default value is 8 hours.
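You can observe idle connections and the timeout on a running server; the session-level change below is only a sketch (28800 seconds is the 8-hour default):

SHOW PROCESSLIST;                    -- idle connections show Sleep in the Command column
SHOW VARIABLES LIKE 'wait_timeout';  -- timeout in seconds, 28800 by default
SET SESSION wait_timeout = 3600;     -- lower the timeout for the current session only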

If the client sends a request after the connection has been disconnected, it receives the error: Lost connection to MySQL server during query. You then have to reconnect and re-issue the request.

In a database, a long connection means that after the connection is established successfully, the client keeps using the same connection for subsequent requests; a short connection means the connection is closed after a few queries and a new one is established for the next query. Because establishing a connection is relatively expensive, it is recommended to use long connections as much as possible.

But when long connections are used exclusively, the memory occupied by MySQL sometimes grows very quickly. This is because the memory MySQL uses temporarily during execution is managed within the connection object, and these resources are only released when the connection is disconnected. If long connections accumulate, they may occupy too much memory and be forcibly killed by the operating system (OOM); from the outside, this looks like MySQL restarting abnormally.

You can solve this problem through the following two solutions:

1. Disconnect long connections periodically. After a connection has been in use for a while, or after the program determines that a memory-heavy query has been executed, disconnect it and reconnect when the next query is needed.

2. If you are using MySQL 5.7 or newer, you can re-initialize the connection's resources by executing mysql_reset_connection after a relatively large operation. This process does not require reconnecting or re-verifying permissions, but it restores the connection to the state it was in when it was just created.

2. Query cache

After the connection is established, you can execute select statements. When MySQL receives a query request, it first checks the query cache to see whether this exact statement has been executed before. Previously executed statements and their results may be cached directly in memory as key-value pairs: the key is the query statement and the value is the query result. If the query's key is found in the cache, the value is returned directly to the client.

If the statement is not in the query cache, execution continues through the later phases, and after execution completes the result is stored in the query cache. When a query hits the cache, MySQL can return the result directly without doing the subsequent, more expensive work, which is very efficient. However, in most cases the query cache is not recommended, because it is invalidated very frequently: any update to a table clears all cached queries on that table. For databases under heavy update pressure, the query cache hit rate is therefore very low.

You can set the parameter query_cache_type to DEMAND, so that the query cache is not used for SQL statements by default. For statements that you do want to use the query cache, you can specify it explicitly with SQL_CACHE, as in the following statement:

select SQL_CACHE * from T where ID=10;

MySQL 8.0 removed the entire query cache feature.
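On MySQL 5.7 and earlier, where the query cache still exists, a rough way to check whether it is helping at all might be:

SHOW VARIABLES LIKE 'query_cache_type';  -- OFF / ON / DEMAND
SHOW GLOBAL STATUS LIKE 'Qcache%';       -- low Qcache_hits relative to Qcache_inserts suggests frequent invalidation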

3. Parser

If the query cache is not hit, real execution of the statement begins. MySQL first has to parse the SQL statement.

The parser first performs lexical analysis. The input is a SQL statement made up of strings and spaces, and MySQL needs to identify what each string is and what it represents.

select * from T where ID=10;

From the keyword select, MySQL recognizes that this is a query. It also recognizes the string T as the table name and the string ID as the column name.

After this recognition, syntax analysis follows. Based on the results of lexical analysis, the syntax analyzer uses the grammar rules to determine whether the SQL statement is valid MySQL syntax. If the syntax is wrong, you receive the error "You have an error in your SQL syntax".
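For example, a statement with a misspelled keyword fails in the parser before anything is executed; the error text below is abbreviated and only illustrative:

mysql> elect * from T where ID=10;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual ...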

4. Optimizer

After the parser, and before execution begins, the statement still goes through the optimizer.

The optimizer decides which index to use when a table has multiple indexes, and decides the join order when a statement joins multiple tables.
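You can inspect the optimizer's choices with EXPLAIN; this sketch assumes the table T used throughout this article:

EXPLAIN SELECT * FROM T WHERE ID = 10;  -- the key column shows the chosen index (NULL means a full scan);
                                        -- for joins, the row order reflects the chosen join order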

5. Executor

Once the optimizer stage is complete, the execution plan of the statement is determined, and the executor stage begins: the statement is actually executed.

When execution starts, the executor first checks whether you have permission to query table T. If not, it returns a permission error, as shown below:

mysql> select * from T where ID=10;
ERROR 1142 (42000): SELECT command denied to user 'b'@'localhost' for table 'T'

If you do have permission, the table is opened and execution continues. When opening the table, the executor uses the interfaces provided by the storage engine declared in the table definition.

For example, if the ID column in table T has no index, the executor's flow is as follows:

1. Call the InnoDB engine interface to fetch the first row of the table and check whether its ID value is 10. If not, skip it; if so, add the row to the result set.

2. Call the engine interface to fetch the next row and repeat the same check, until the last row of the table has been read.

3. The executor returns the record set of all rows that satisfied the condition during the scan to the client as the result set.

The slow query log contains a rows_examined field indicating how many rows the statement scanned during execution. This value is accumulated each time the executor calls the engine to fetch a row.

In some scenarios, a single executor call causes the engine to scan multiple rows internally, so the number of rows scanned by the engine is not always exactly the same as rows_examined.
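A minimal way to see rows_examined for yourself; the log excerpt in the comment is only a sketch of the typical slow-log format, not real output:

SET GLOBAL slow_query_log = ON;  -- enable the slow query log
SET GLOBAL long_query_time = 0;  -- log every statement, for demonstration only
-- A slow-log entry then contains a header line such as:
-- # Query_time: 0.012  Lock_time: 0.000  Rows_sent: 1  Rows_examined: 100000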

2. Logging system

Table T is created with the following statement; it has a primary key ID and an integer column c:

create table T(ID int primary key, c int);

To add 1 to the value of c in the row with ID=2, the SQL statement is:

update T set c=c+1 where ID=2;

1. redo log

In MySQL, if every update operation had to be written straight to disk, with the disk locating the corresponding record before updating it, the IO and lookup costs of the whole process would be very high. The WAL technique often mentioned in connection with MySQL stands for Write-Ahead Logging, and its key idea is to write the log first and write to disk later.

When a record needs to be updated, the InnoDB engine writes the change to the redo log and updates the page in the buffer pool; at that point the update is considered complete.

The buffer pool is a cache of physical pages. Any modification in InnoDB is first applied to a page in the buffer pool; the page is then marked as a dirty page and placed on a dedicated flush list, and a dedicated flushing thread periodically writes these pages to disk.

InnoDB's redo log has a fixed size. For example, it can be configured as a group of 4 files of 1GB each; writing starts at the beginning, and when the end is reached it wraps around to the beginning and continues cyclically.
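On versions before MySQL 8.0.30 (which replaces these parameters with innodb_redo_log_capacity), the layout described above roughly corresponds to settings like the following; this is a sketch, not a tuning recommendation:

SHOW VARIABLES LIKE 'innodb_log_file_size';       -- size of each redo log file
SHOW VARIABLES LIKE 'innodb_log_files_in_group';  -- number of files in the group
-- Both are read-only at runtime; the "4 files of 1GB" example would be set in my.cnf:
--   [mysqld]
--   innodb_log_file_size      = 1G
--   innodb_log_files_in_group = 4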

The write pos is the current write position; it moves forward as log is written, and after reaching the end of file 3 it wraps back to the beginning of file 0. The checkpoint is the current erase position; it also moves forward cyclically, and before a record is erased, the corresponding changes must be flushed to the data files.

The space between write pos and checkpoint is available for recording new operations. If write pos catches up with checkpoint, no new updates can be executed; MySQL has to pause, flush some records to advance the checkpoint, and only then continue.

With the redo log, InnoDB can guarantee that even if the database restarts abnormally, previously committed records are not lost. This capability is called crash-safe.

2. binlog (archive log)

Viewed as a whole, MySQL has two parts: the Server layer, which handles MySQL's functional concerns, and the engine layer, which handles the specifics of storage. The redo log is a log specific to the InnoDB engine, while the Server layer has its own log, called the binlog.

Why are there two logs?

Because MySQL did not have the InnoDB engine at the beginning. MySQL's built-in engine was MyISAM, which has no crash-safe capability, and the binlog can only be used for archiving. InnoDB was introduced into MySQL as a plug-in, and since relying only on the binlog cannot provide crash safety, InnoDB uses the redo log to implement crash-safe capability.

binlog formats:

The binlog has three formats: STATEMENT, ROW, and MIXED.

1) STATEMENT mode

The binlog records the original text of the SQL statement. The advantage is that it does not have to record the change to every row, which reduces binlog volume, saves IO, and improves performance. The disadvantage is that in some cases master and replica data can become inconsistent (for example, the sleep() function, last_insert_id(), and user-defined functions (UDFs) can cause problems).

2) ROW mode

ROW mode does not record the context of each SQL statement; it records only which rows were modified and what the modifications were. It also avoids the cases in which calls to stored procedures, functions, or triggers cannot be replicated correctly. The disadvantage is that it generates a large volume of log, especially during an alter table, when the log size can grow dramatically.

3) MIXED mode

MIXED mode is a combination of the two modes above. In general, STATEMENT mode is used to record the binlog, and for operations that STATEMENT mode cannot replicate correctly, ROW mode is used instead. MySQL chooses the format based on the SQL statement being executed.
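The format is controlled by the binlog_format variable; a small sketch of checking and changing it (changing it requires sufficient privileges and should be done with care on a replicated setup):

SHOW VARIABLES LIKE 'binlog_format';  -- STATEMENT, ROW, or MIXED
SET SESSION binlog_format = 'ROW';    -- switch the current session to ROW format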

3. Differences between the redo log and the binlog

1. The redo log is specific to the InnoDB engine; the binlog is implemented by MySQL's Server layer and can be used by all engines.

2. The redo log is a physical log that records what modifications were made to a data page; the binlog is a logical log that records the original logic of the statement, for example "add 1 to the c field of the row with ID=2".

3. The redo log is written cyclically, so its space is fixed and will be used up and reused; the binlog is appended to, and when a binlog file reaches a certain size, MySQL switches to the next file without overwriting earlier logs.

4. Two-phase commit

The internal flow of the executor and the InnoDB engine when executing this update statement is:

1. The executor asks the engine to fetch the row with ID=2. ID is the primary key, so the engine finds the row directly by tree search. If the page containing the row with ID=2 is already in memory, the row is returned to the executor directly; otherwise, it is first read from disk into memory and then returned.

2. The executor takes the row data returned by the engine, adds 1 to the value to produce a new row, and then calls the engine interface to write the new row.

3. The engine updates the new row in memory and records the operation in the redo log; at this point the redo log is in the prepare state. It then informs the executor that execution is complete and the transaction can be committed at any time.

4. The executor generates the binlog for this operation and writes the binlog to disk.

5. The executor calls the engine's commit-transaction interface, the engine changes the redo log just written to the commit state, and the update is complete.

In the execution flow of this update statement, some steps are executed inside InnoDB and others inside the executor. Splitting the write of the redo log into two steps, prepare and commit, is what is called two-phase commit.

Since the redo log and the binlog are two independent pieces of logic, without two-phase commit you would have to either write the redo log first and then the binlog, or write the binlog first and then the redo log.

1. Write the redo log first, then the binlog. Suppose the MySQL process restarts abnormally after the redo log has been written but before the binlog is written. Because the data can be recovered once the redo log is written, even after a crash, the value of c in this row is 1 after recovery. But since the crash happened before the binlog was written, this statement is not in the binlog, so a database later restored from the binlog has c=0 for this row.

2. Write the binlog first, then the redo log. If the crash happens after the binlog is written, then since the redo log has not been written yet, the transaction is invalid after crash recovery, so the value of c in this row is 0. But the binlog already records the change of c from 0 to 1, so when the binlog is later used to restore data, an extra transaction appears and the restored row has c=1.

Without two-phase commit, the state of the database can therefore become inconsistent with the state of a database recovered from its logs. Both the redo log and the binlog are used to represent a transaction's commit status, and two-phase commit keeps the two states logically consistent.

The redo log is used to guarantee crash-safe capability. When the innodb_flush_log_at_trx_commit parameter is set to 1, the redo log of every transaction is persisted directly to disk, which ensures that data is not lost after MySQL restarts abnormally.

When the sync_binlog parameter is set to 1, the binlog of every transaction is persisted to disk, which ensures that the binlog is not lost after MySQL restarts abnormally.
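A quick sketch of checking and enforcing this "double 1" setting at runtime (both variables are dynamic):

SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'sync_binlog';
SET GLOBAL innodb_flush_log_at_trx_commit = 1;  -- fsync the redo log at every commit
SET GLOBAL sync_binlog = 1;                     -- fsync the binlog at every commit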

3. How MySQL flushes dirty pages

1. Scenarios that trigger flushing

When a data page in memory differs from the corresponding page on disk, the in-memory page is called a dirty page. After the in-memory data is written to disk, the page contents in memory and on disk are consistent again, and the page is called a clean page.

  • The first scenario: InnoDB's redo log is full. The system then stops all update operations and pushes the checkpoint forward so that the redo log has space to continue writing. Advancing the checkpoint from CP to CP' requires flushing to disk all dirty pages corresponding to the log between the two points; after that, the region between write pos and CP' becomes redo log space that can be written again.

  • The second scenario: the system is short of memory. When a new memory page is needed and memory is insufficient, some data pages have to be evicted to make room for others. If a dirty page is evicted, it must first be written to disk.

Why can't the page simply be discarded, and the data page read back from disk the next time it is needed, with the redo log applied on top?

This is a performance consideration. If evicting a dirty page always writes it to disk, every data page is guaranteed to be in one of two states: either it is in memory, in which case the in-memory copy is certainly correct and can be returned directly; or it is not in memory, in which case the copy in the data file is certainly correct and can be read into memory and returned. This is the most efficient arrangement.

  • The third scenario: MySQL flushes dirty pages when it considers the system idle; it also finds time to flush a few even when the system is busy.
  • The fourth scenario: when MySQL shuts down normally, it flushes all dirty pages in memory to disk, so that the next time it starts, data can be read directly from disk and startup is fast.

When the redo log is full and dirty pages must be flushed, the whole system can no longer accept updates: all updates are blocked.

Running out of memory and having to write dirty pages to disk first is the normal case. InnoDB manages memory with a buffer pool, and a memory page in the buffer pool is in one of three states:

  • not yet used
  • used, and clean
  • used, and dirty

InnoDB's strategy is to use memory as much as possible, so on a long-running instance there are very few unused pages.
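One way to get a feel for this on a running instance (the counters are in units of pages):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';  -- total, data, dirty, and free pages in the buffer pool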

When a data page to be read is not in memory, a page must be requested from the buffer pool, and the least recently used data page is evicted from memory: if the evicted page is clean, it is simply freed and reused; if it is dirty, it must first be flushed to disk and become a clean page before it can be reused.

Flushing pages is routine, but the following two situations clearly hurt performance:

  • A query has to evict too many dirty pages, which makes the query's response time noticeably longer.
  • The redo log is full and all updates are blocked, so write throughput drops to 0, which is unacceptable for latency-sensitive workloads.

2. InnoDB's control strategy for flushing dirty pages

First, you need to tell InnoDB correctly what the IO capacity of its host is, so that InnoDB knows how fast it can flush when it needs to flush dirty pages at full speed. The parameter is innodb_io_capacity, and it is recommended to set it to the disk's IOPS.
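A hedged sketch; the 2000 below is only a placeholder, and the real value should come from measuring the disk's IOPS:

SHOW VARIABLES LIKE 'innodb_io_capacity';  -- how fast InnoDB believes it can flush
SET GLOBAL innodb_io_capacity = 2000;      -- set it to the measured IOPS (value here is illustrative)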

InnoDB's flushing speed takes into account both the dirty page ratio and the rate at which the redo log is being written. The parameter innodb_max_dirty_pages_pct is the upper limit on the dirty page ratio; its default value is 75%. The dirty page ratio is Innodb_buffer_pool_pages_dirty / Innodb_buffer_pool_pages_total, which can be computed with SQL like this:

select VARIABLE_VALUE into @a from performance_schema.global_status where VARIABLE_NAME = 'Innodb_buffer_pool_pages_dirty';
select VARIABLE_VALUE into @b from performance_schema.global_status where VARIABLE_NAME = 'Innodb_buffer_pool_pages_total';
select @a/@b;

4. Log-related questions

Question 1: What happens if MySQL restarts abnormally at different moments during two-phase commit?

If the crash happens at moment A, that is, after the redo log has been written in the prepare phase but before the binlog is written, then since the binlog has not been written and the redo log has not been committed, the transaction is rolled back during crash recovery. Because the binlog has not been written, nothing is propagated to the replica either.

If the crash happens at moment B, that is, after the binlog has been written but before the redo log is committed, how does MySQL handle crash recovery?

The rules applied during crash recovery are:

1) If the transaction in the redo log is complete, that is, it already has the commit marker, commit it directly.

2) If the transaction in the redo log has only a complete prepare record, check whether the corresponding transaction's binlog exists and is complete:

a. If it is complete, commit the transaction.

b. Otherwise, roll back the transaction.

A crash at moment B corresponds to case 2(a), so the transaction is committed during crash recovery.

Question 2: How does MySQL know that the binlog is complete?

A transaction's binlog has a well-defined complete format:

  • A STATEMENT-format binlog ends with a COMMIT statement.
  • A ROW-format binlog ends with an XID event (this can be seen directly in the binlog, as in the sketch below).
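A minimal way to peek at binlog events; the file name binlog.000001 is only an assumption, so use SHOW BINARY LOGS to find a real one first:

SHOW BINARY LOGS;                                -- list the binlog files on this server
SHOW BINLOG EVENTS IN 'binlog.000001' LIMIT 20;  -- a committed ROW-format transaction ends with an Xid event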

Question 3: How are the redo log and the binlog associated with each other?

They share a common data field called XID. During crash recovery, the redo log is scanned in order:

  • If a redo log entry has both prepare and commit, the transaction is committed directly.
  • If a redo log entry has only prepare and no commit, the XID is used to find the corresponding transaction in the binlog.

Question 4: How large should the redo log be set?

With the multi-terabyte disks that are common today, the redo log can be set to 4 files of 1GB each.

Question 5: On a normally running instance, when data is finally written to disk, does it come from the redo log or from the buffer pool?

The redo log does not record the complete data of a data page, so it is not capable of updating disk data pages by itself; the final write of data to disk never comes from the redo log.

1. On a normally running instance, after a data page is modified it differs from the page on disk and is therefore a dirty page. Writing the data to its final place on disk simply means writing the in-memory data page out. This process has nothing to do with the redo log.

2. In the crash-recovery scenario, if InnoDB determines that a data page may have lost updates during the crash, it reads the page into memory and then applies the redo log to update the in-memory contents. After the update, the in-memory page becomes a dirty page, and we are back to the first case.

Question 6: What is the redo log buffer? Is memory modified first, or is the redo log file written first?

During a transaction's updates, the log has to be written multiple times. For example, consider this transaction:

begin;
insert into t1 ...
insert into t2 ...
commit;

This transaction inserts records into two tables. While the data is being inserted, the generated logs have to be saved somewhere, but they cannot be written directly into the redo log file before the commit.

So the redo log buffer is a piece of memory used to hold redo log entries first. In other words, when the first insert executes, the in-memory data is modified and the redo log buffer is also written. But the log is only really written to the redo log file when the commit statement executes.

5. How does MySQL ensure data is not lost?

As long as the redo log and the binlog are persisted to disk, data can be recovered after MySQL restarts abnormally.

1. How the binlog is written

While a transaction executes, its log is first written to the binlog cache; when the transaction commits, the binlog cache is written to the binlog file. A transaction's binlog cannot be split, so no matter how large the transaction is, it must be written in one go.

The system allocates memory for the binlog cache, one piece per thread, and the parameter binlog_cache_size controls how much memory a single thread's binlog cache may use. If this size is exceeded, the excess is temporarily spilled to disk.

When the transaction commits, the executor writes the complete transaction from the binlog cache into the binlog and clears the binlog cache.
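A rough way to tell whether transactions are spilling their binlog cache to disk; a Binlog_cache_disk_use that keeps growing relative to Binlog_cache_use suggests binlog_cache_size is too small:

SHOW VARIABLES LIKE 'binlog_cache_size';  -- per-thread binlog cache size, in bytes
SHOW GLOBAL STATUS LIKE 'Binlog_cache%';  -- transactions that fit in the cache vs. those that spilled to disk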

Each thread has its own binlog cache, but they all share a single binlog file.

  • write means writing the log into the file system's page cache without persisting the data to disk, so it is relatively fast.
  • fsync is the operation that actually persists the data to disk. It is generally fsync that consumes the disk's IOPS.

The timing of write and fsync is controlled by the parameter sync_binlog:

  • sync_binlog=0 means that each transaction commit only does write, not fsync.
  • sync_binlog=1 means that each transaction commit performs fsync.
  • sync_binlog=N (N>1) means that each transaction commit does write, but fsync is performed only after N transactions have accumulated.

Therefore, in IO-bound scenarios, setting sync_binlog to a relatively large value can improve performance. The corresponding risk is that if the host restarts abnormally, the binlog of the most recent N transactions is lost.

2. How the redo log is written

While a transaction executes, the redo log it generates is first written to the redo log buffer. The contents of the redo log buffer are not persisted to disk every time they are generated; it is also possible for part of the redo log buffer to be persisted to disk before the transaction commits.

The redo log can be in one of three states:

These three states are:

  • It exists in the redo log buffer, which is physically in the MySQL process's memory.
  • It has been written to disk (write) but not yet persisted (fsync), which physically means it sits in the file system's page cache.
  • It has been persisted to disk, that is, it is on the hard disk.

Writing the log to the redo log buffer and writing it to the page cache are both fast, but persisting it to disk is much slower.

To control the redo log write strategy, InnoDB provides the innodb_flush_log_at_trx_commit parameter, which has three possible values:

  • When set to 0, each transaction commit only leaves the redo log in the redo log buffer.
  • When set to 1, each transaction commit persists the redo log directly to disk.
  • When set to 2, each transaction commit only writes the redo log to the page cache.

InnoDB has a background thread that, every second, writes the log in the redo log buffer to the file system's page cache with write and then persists it to disk with fsync. Redo log generated while a transaction is still executing is also written directly into the redo log buffer, and that log may also be persisted to disk by the background thread. In other words, the redo log of an uncommitted transaction may already have been persisted to disk.

Besides this, there are two scenarios in which the redo log of an uncommitted transaction is written to disk:

1. When the space occupied by the redo log buffer is about to reach half of innodb_log_buffer_size, the background thread actively writes it to disk. Because the transaction has not committed, this write only calls write without fsync, so the log only reaches the file system's page cache.

2. When a parallel transaction commits, it incidentally persists this transaction's redo log buffer to disk. Suppose transaction A is halfway through execution and has written some redo log into the buffer, and transaction B on another thread then commits. If innodb_flush_log_at_trx_commit is set to 1, transaction B persists everything in the redo log buffer to disk, including transaction A's log.

In the timing of two-phase commit, the redo log is prepared first, then the binlog is written, and finally the redo log is committed. If innodb_flush_log_at_trx_commit is set to 1, the redo log must be persisted once in the prepare phase.

MySQL's "double 1" configuration means that both sync_binlog and innodb_flush_log_at_trx_commit are set to 1. In other words, before a transaction is fully committed, it has to wait for two disk flushes: one for the redo log (in the prepare stage) and one for the binlog.

3. Group commit mechanism

The log sequence number (LSN) is monotonically increasing and corresponds to each write point of the redo log. Each time a redo log record of a given length is written, that length is added to the LSN. The LSN is also written to InnoDB data pages, to ensure that a data page does not apply the same redo log more than once.

As an example, suppose three concurrent transactions (trx1, trx2, trx3) are in the prepare phase and have all written their redo log into the redo log buffer; they are about to persist it to disk, and their corresponding LSNs are 50, 120, and 160.

1. trx1 arrives first and is chosen as the leader of this group.

2. By the time trx1 starts writing to disk, the group already contains three transactions, and the LSN has grown to 160.

3. When trx1 writes to disk, it carries LSN=160, so when trx1 returns, all redo log with an LSN less than or equal to 160 has been persisted to disk.

4. At this point trx2 and trx3 can return directly.

In a group commit, the more members the group has, the more disk IOPS are saved.

To let a single fsync bring along more group members, MySQL deliberately delays the fsync so that more transactions can accumulate.

The binlog can also be group-committed. When the binlog is fsynced to disk, if the binlogs of several transactions have already been written, they are persisted together, which likewise reduces IOPS consumption.

If you want to improve the effect of binlog group commit, you can tune the two parameters binlog_group_commit_sync_delay and binlog_group_commit_sync_no_delay_count:

1. binlog_group_commit_sync_delay specifies how many microseconds to wait before calling fsync.

2. binlog_group_commit_sync_no_delay_count specifies how many transactions to accumulate before calling fsync.

fsync is called as soon as either of these two conditions is met; a small sketch of setting them follows.
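A hedged sketch of tuning these two knobs; the numbers are placeholders, not recommendations:

SET GLOBAL binlog_group_commit_sync_delay = 100;          -- wait up to 100 microseconds before fsync
SET GLOBAL binlog_group_commit_sync_no_delay_count = 10;  -- or fsync once 10 transactions have accumulated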

The performance benefit of the WAL mechanism mainly comes from two aspects:

  • Both the redo log and the binlog are written sequentially, and sequential disk writes are faster than random writes.
  • The group commit mechanism can significantly reduce disk IOPS consumption.

4. If MySQL has a performance bottleneck and the bottleneck is IO, what can be done to improve performance?

1. Set the binlog_group_commit_sync_delay (how many microseconds to wait before calling fsync) and binlog_group_commit_sync_no_delay_count (how many transactions to accumulate before calling fsync) parameters to reduce the number of binlog disk writes. This approach works by deliberately adding a wait, so it may increase statement response time, but it carries no risk of losing data.

2. Set sync_binlog to a value greater than 1 (write on every transaction commit, but fsync only after N transactions have accumulated). The risk is that binlog records are lost if the host loses power.

3. Set innodb_flush_log_at_trx_commit to 2 (on every transaction commit, the redo log is only written to the page cache). The risk is that data is lost if the host loses power.
