How to improve InnoDB performance by 55% for write-bound loads

During April’s Percona Live MySQL Conference and Expo 2014, I attended a talk on MySQL 5.7 performance and scalability given by Dimitri Kravtchuk, the Oracle MySQL benchmark specialist. He mentioned at some point that the InnoDB double write buffer was a real performance killer. For those who don’t know what the InnoDB double write buffer is, it is a disk buffer where pages are written before being written to the actual data file. Upon restart, pages in the double write buffer are rewritten to their data files if complete. This avoids data file corruption from half-written pages. I knew it had an impact on performance (on ZFS, since it is transactional, I always disable it), but I never realized how large the impact could be. Back from PLMCE, a friend had dropped off a Dell R320 server at my place, asking me to set up the OS and test it. What better way to test a new server than to run benchmarks on it, so here we go!

ZFS is not the only transactional filesystem; ext4, with the option “data=journal”, can also be transactional. So, the question is: is it better to have the InnoDB double write buffer enabled or to use the ext4 transaction journal? And if the latter is better, how does it compare with xfs, the filesystem I used to recommend but which does not support transactions?

Methodology

The goal is to stress the double write buffer, so the load has to be write intensive. The server has a simple mirror of two 7.2k rpm drives. There is no controller write cache and the drives’ write caches are disabled. I decided to use the Percona tpcc-mysql benchmark tool; with 200 warehouses, the total dataset size was around 18GB, fitting entirely within the InnoDB buffer pool (the server has 24GB of RAM). Here is the relevant part of the my.cnf:


innodb_read_io_threads=4
innodb_write_io_threads=8  #To stress the double write buffer
innodb_buffer_pool_size=20G
innodb_buffer_pool_load_at_startup=ON
innodb_log_file_size=32M  #Small log files, more page flush
innodb_log_files_in_group=2
innodb_file_per_table=1
innodb_log_buffer_size=8M
innodb_flush_method=O_DIRECT
innodb_flush_log_at_trx_commit=0
skip-innodb_doublewrite  #commented or not depending on test

So, I generated the dataset for 200 warehouses, added the keys but not the foreign key constraints, loaded all of it into the buffer pool with a few queries and dumped the buffer pool. Then, with MySQL stopped, I made a file-level backup to a different partition. I used the MySQL 5.6.16 version that comes with Ubuntu 14.04; at the time, Percona Server was not yet available for 14.04. Each benchmark followed this procedure:

  1. Stop MySQL
  2. Unmount /var/lib/mysql
  3. Comment or uncomment skip-innodb_doublewrite in my.cnf
  4. Mount /var/lib/mysql with the specific options
  5. Copy the reference backup to /var/lib/mysql
  6. Start MySQL and wait for the buffer pool load to complete
  7. Start tpcc from another server

The tpcc_start command I used is the following:

./tpcc_start -h10.2.2.247 -P3306 -dtpcc -utpcc -ptpcc -w200 -c32 -r300 -l3600 -i60


I used 32 connections, let the tool run for 300s of warm-up, enough to reach a steady level of dirty pages, and then let the benchmark run for one hour, reporting results every minute.
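To make the runs repeatable, the procedure above can be scripted. Below is a minimal, untested bash sketch of one run; the backup path, the data partition device and the mount options are placeholders to adapt to your own setup, and the tpcc_start invocation is the one shown above, launched from the benchmark client.

#!/bin/bash
# One benchmark run: restore the reference backup on a freshly mounted
# /var/lib/mysql, then start MySQL and wait for the buffer pool load.
MOUNT_OPTS="rw,data=journal"           # e.g. "rw", "rw,dioread_nolock", "rw,data=journal"
DATA_DEV=/dev/sdb1                     # placeholder device for /var/lib/mysql
BACKUP_DIR=/backup/mysql-reference     # file-level backup taken with MySQL stopped

service mysql stop
umount /var/lib/mysql
# Edit my.cnf beforehand to comment or uncomment skip-innodb_doublewrite for this test.
mount -o "$MOUNT_OPTS" "$DATA_DEV" /var/lib/mysql
rsync -a --delete "$BACKUP_DIR"/ /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql
service mysql start
# Wait for the buffer pool load to complete before launching tpcc_start
# from the other server, e.g. by polling:
#   mysql -e "SHOW STATUS LIKE 'Innodb_buffer_pool_load_status'"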

Results

Test                Double write buffer   File system options   Average NOTPM over 1h
ext4_dw             Yes                   rw                    690
ext4_dionolock_dw   Yes                   rw,dioread_nolock     668
ext4_nodw           No                    rw                    1107
ext4trx_nodw        No                    rw,data=journal       1066
xfs_dw              Yes                   xfs rw,noatime        754

So, from the above table, the first test I did was plain ext4 with the InnoDB double write buffer enabled and it yielded 690 new order transactions per minute (NOTPM). Reading the ext4 documentation, I also wanted to try the “dioread_nolock” option, which is supposed to reduce mutex contention, and this time I got slightly less, 668 NOTPM. The difference is within the measurement error and isn’t significant. Removing the InnoDB double write buffer, although unsafe, boosted the throughput to 1107 NOTPM, a 60% increase! Wow, the double write buffer indeed has a huge impact. But what is the impact of asking the file system to replace the InnoDB double write buffer? Surprisingly, the performance level is only slightly lower at 1066 NOTPM, and vmstat reported about twice the amount of writes, which makes sense since with data=journal every block is written both to the journal and in place. I needed to redo the tests a few times to convince myself. Getting a 55% increase in performance on the same hardware is not common, except when trivial configuration errors are being fixed. Finally, I used to recommend xfs with the InnoDB double write buffer enabled to customers; at 754 NOTPM it is about 10% faster than ext4 with the double write buffer, close to what I was expecting. The graphic below presents the numbers in a more visual form.
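The post only mentions vmstat; a simple way to reproduce that kind of measurement during a run (an assumption about the exact method, not something shown in the original) is to sample the write columns of vmstat or iostat on the database server:

vmstat 60        # the 'bo' column is blocks written out per 60-second sample
iostat -dxk 60   # per-device w/s and wkB/s, if the sysstat package is installed

Comparing the averages between runs with and without data=journal should show roughly the doubling of writes mentioned above.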

[Figure: TPCC NOTPM for various configurations]

In terms of performance stability, you’ll find below a graphic of the per-minute NOTPM output for three of the tests: ext4 non-transactional with the double write buffer, ext4 transactional without the double write buffer, and xfs with the double write buffer. The dispersion is qualitatively similar for all three. The values presented above are just the averages of those data sets.

[Figure: TPCC NOTPM evolution over time]

Safety

InnoDB data corruption is no fun and removing the InnoDB double write buffer is a bit scary. In order to be sure it is safe, I executed the following procedure ten times:

  1. Start MySQL and wait for recovery and for the buffer pool load to complete
  2. Check the error log for any sign of corruption
  3. Start tpcc from another server
  4. After about 10 minutes, physically unplug the server
  5. Plug the server back in and restart it

I observed no corruption. I was still a bit concerned: what if the test was wrong? So I removed the “data=journal” mount option and did a new run; I got corruption on the first try. Given the procedure I followed and the number of crash tests, I think it is reasonable to assume it is safe to replace the InnoDB double write buffer with the ext4 transactional journal.
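The original post does not show how the error log was inspected; a minimal sketch of such a check, assuming the Ubuntu default error log path, could be:

# After each crash-and-restart cycle, look for InnoDB complaints before trusting the run
grep -iE 'corrupt|is in the future' /var/log/mysql/error.log \
  && echo 'possible corruption, investigate' \
  || echo 'error log looks clean'

The patterns match the usual InnoDB messages about page corruption and about log sequence numbers ahead of the data files; a more thorough check would also run CHECK TABLE on the tpcc tables.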

I also looked at the kernel ext4 sources and changelog. Until recently, before kernel 3.2, O_DIRECT wasn’t supported with data=journal and MySQL would have issued a warning in the error log. With recent kernels and data=journal, O_DIRECT is mapped to O_DSYNC and direct I/O is faked, which is exactly what is needed here. Indeed, I tried “innodb_flush_method=O_DSYNC” and got the same results. With older kernels I strongly advise using the “innodb_flush_method=O_DSYNC” setting to make sure the files are opened in a way that keeps them transactional on ext4. As always, test thoroughly; I only tested on Ubuntu 14.04.
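Putting the pieces together, the whole setup is small; this is a rough sketch of what it could look like (the device path is a placeholder, and the fstab line assumes /var/lib/mysql lives on its own partition):

# /etc/fstab entry for the MySQL data partition (placeholder device)
/dev/mapper/vg-mysql  /var/lib/mysql  ext4  rw,data=journal  0  2

# my.cnf additions
[mysqld]
skip-innodb_doublewrite
innodb_flush_method = O_DSYNC   # explicit, recommended on older kernels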

Impacts on MyISAM

Since we are no longer really using O_DIRECT, even if it is set in my.cnf, the OS file cache will be used for the InnoDB data. If the database only uses InnoDB, that is not a big deal, but if MyISAM is used significantly, this may cause performance issues, since MyISAM relies on the OS file cache. So be warned.

Fast SSDs

If you have an SSD setup that doesn’t offer a transactional file system such as the FusionIO directFS, a very interesting setup would be to mix spinning drives and SSDs. For example, let’s suppose we have a mirror of spinning drives handled by a RAID controller with a write cache (and a BBU) and SSD storage on a PCIe card. To reduce the write load on the SSD, we could send the file system journal to the spinning drives using the “journal_path=path” or “journal_dev=devnum” options of ext4. The RAID controller write cache would do an awesome job of merging the write operations for the file system journal, and the amount of writes going to the SSD would be cut roughly in half. I don’t have access to such a setup but it seems very promising performance-wise.
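The post doesn’t show the actual commands; a hedged sketch of how such an external journal could be created (device names are placeholders, and the journal block size must match the file system block size) might look like this:

mke2fs -b 4096 -O journal_dev /dev/md0p1            # journal device on the spinning-drive mirror
mkfs.ext4 -b 4096 -J device=/dev/md0p1 /dev/fioa1   # data file system on the SSD
mount -o rw,noatime,data=journal /dev/fioa1 /var/lib/mysql
# If the journal device's major/minor numbers ever change, remount with
# journal_dev=devnum (or journal_path=path on recent kernels) so ext4 can find it again.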

Conclusion

Like ZFS, ext4 can be made transactional, and replacing the InnoDB double write buffer with the file system transaction journal yields a 55% increase in performance for a write-intensive workload. Performance gains are also expected for SSD and mixed spinning/SSD configurations.
