Explore the secrets of IO performance optimization in Linux systems
In today's era of big data and artificial intelligence, IO performance is crucial for any computer system. For Linux systems in particular, it pays to understand the IO performance model and the optimization strategies that go with it. This article introduces the Linux system IO model in detail, along with performance optimization methods for different IO operations.
The current mainstream third-party IO testing tools are fio, iometer and Orion, each with its own strengths.
fio is the most convenient on Linux systems; iometer is more convenient on Windows systems; Orion is Oracle's IO testing tool, which can simulate the read/write patterns of Oracle database scenarios without requiring an Oracle database installation.
The following is an IO test on SAN storage using the fio tool on a Linux system.
1. Install fio
Method 1: Download the fio-2.1.10.tar file from the fio official website. After unpacking it, run ./configure, make, and make install, and fio is ready to use (see the sketch below).
Method 2: Install through yum on a Linux system: yum install -y fio
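For reference, a minimal sketch of Method 1's build steps; the tarball name and version are the ones mentioned above, so adjust them for whatever version you actually download:

# Build and install fio from the source tarball (version taken from above)
tar xf fio-2.1.10.tar
cd fio-2.1.10
./configure
make
make install    # installing usually requires root privileges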
2. fio parameter explanation
You can run fio -help to list the parameters; for details, consult the HOWTO document on the official website. The following describes several common parameters:
filename=/dev/emcpowerb  supports either a file system file or a raw device, e.g. -filename=/dev/sda2 or -filename=/dev/sdb
direct=1                 bypass the machine's own buffer during the test, making the results more realistic
rw=randread              test random read I/O
rw=randwrite             test random write I/O
rw=randrw                test mixed random read and write I/O
rw=read                  test sequential read I/O
rw=write                 test sequential write I/O
rw=rw                    test mixed sequential read and write I/O
bs=4k                    the block size of a single IO is 4k
bsrange=512-2048         as above, but specifies a range of block sizes
size=5g                  the test file size is 5 GB, tested with 4k IOs
numjobs=30               the test uses 30 threads
runtime=1000             the test runs for 1000 seconds; if omitted, it runs until the 5 GB file has been fully written in 4k IOs
ioengine=psync           use the psync IO engine; to use the libaio engine, install the libaio-devel package via yum
rwmixwrite=30            in mixed read/write mode, writes account for 30%
group_reporting          controls result display: aggregates the statistics of every process
In addition:
lockmem=1g               use only 1 GB of memory for the test
zero_buffers             initialize the system buffers with zeros
nrfiles=8                the number of files generated per process
3. Detailed explanation of fio test scenarios and report generation
Test scenarios:
100% random, 100% read, 4K
fio -filename=/dev/emcpowerb -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=4k -size=1000G -numjobs=50 -runtime=180 -group_reporting -name=rand_100read_4k
100% random, 100% write, 4K
fio -filename=/dev/emcpowerb -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=4k -size=1000G -numjobs=50 -runtime=180 -group_reporting -name=rand_100write_4k
100% sequential, 100% read, 4K
fio -filename=/dev/emcpowerb -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=4k -size=1000G -numjobs=50 -runtime=180 -group_reporting -name=sqe_100read_4k
100% sequential, 100% write, 4K
fio -filename=/dev/emcpowerb -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=4k -size=1000G -numjobs=50 -runtime=180 -group_reporting -name=sqe_100write_4k
100% random, 70% read, 30% write, 4K
fio -filename=/dev/emcpowerb -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=4k -size=1000G -numjobs=50 -runtime=180 -group_reporting -name=randrw_70read_4k
Viewing the result report:
[root@rac01-node02]# fio -filename=/dev/sdc4 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=4k -size=1000G -numjobs=50 -runtime=180 -group_reporting -name=randrw_70read_4k_local
randrw_70read_4k_local: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
fio-2.1.10
Starting 50 threads
Jobs: 21 (f=21): [mm_m_m__mmmmmm__mm_m_mmm_mm__m_m_m] [3.4% done] [7004KB/2768KB/0KB /s] [1751/692/0 iops] [eta 01h:27m:00s]
randrw_70read_4k_local: (groupid=0, jobs=50): err= 0: pid=13710: Wed May 31 10:23:31 2017
  read : io=1394.2MB, bw=7926.4KB/s, iops=1981, runt=180113msec
    clat (usec): min=39, max=567873, avg=24323.79, stdev=25645.98
     lat (usec): min=39, max=567874, avg=24324.23, stdev=25645.98
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    5], 10.00th=[    6], 20.00th=[    7],
     | 30.00th=[    9], 40.00th=[   12], 50.00th=[   16], 60.00th=[   21],
     | 70.00th=[   27], 80.00th=[   38], 90.00th=[   56], 95.00th=[   75],
     | 99.00th=[  124], 99.50th=[  147], 99.90th=[  208], 99.95th=[  235],
     | 99.99th=[  314]
    bw (KB  /s): min=   15, max=  537, per=2.00%, avg=158.68, stdev=38.08
  write: io=615280KB, bw=3416.8KB/s, iops=854, runt=180113msec
    clat (usec): min=167, max=162537, avg=2054.79, stdev=7665.24
     lat (usec): min=167, max=162537, avg=2055.38, stdev=7665.23
    clat percentiles (usec):
     |  1.00th=[  201],  5.00th=[  227], 10.00th=[  249], 20.00th=[  378],
     | 30.00th=[  548], 40.00th=[  692], 50.00th=[  844], 60.00th=[  996],
     | 70.00th=[ 1160], 80.00th=[ 1304], 90.00th=[ 1720], 95.00th=[ 3856],
     | 99.00th=[40192], 99.50th=[58624], 99.90th=[98816], 99.95th=[123392],
     | 99.99th=[148480]
    bw (KB  /s): min=    6, max=  251, per=2.00%, avg=68.16, stdev=29.18
    lat (usec) : 50=0.01%, 100=0.03%, 250=3.15%, 500=5.00%, 750=5.09%
    lat (usec) : 1000=4.87%
    lat (msec) : 2=9.64%, 4=4.06%, 10=21.42%, 20=18.08%, 50=19.91%
    lat (msec) : 100=7.24%, 250=1.47%, 500=0.03%, 750=0.01%
  cpu          : usr=0.07%, sys=0.21%, ctx=522490, majf=0, minf=7
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=356911/w=153820/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=1394.2MB, aggrb=7926KB/s, minb=7926KB/s, maxb=7926KB/s, mint=180113msec, maxt=180113msec
  WRITE: io=615280KB, aggrb=3416KB/s, minb=3416KB/s, maxb=3416KB/s, mint=180113msec, maxt=180113msec

Disk stats (read/write):
  sdc: ios=356874/153927, merge=0/10, ticks=8668598/310288, in_queue=8978582, util=99.99%

Explanation of the output fields:
io           how many MB of IO were performed
bw           average IO bandwidth
iops         IOPS
runt         thread run time
slat         submission latency
clat         completion latency
lat          response time
bw           bandwidth
cpu          CPU utilization
IO depths    IO queue depth distribution
IO submit    number of IOs submitted per submit call
IO complete  like the above submit number, but for completions instead
IO issued    the number of read/write requests issued, and how many of them were short
IO latencies distribution of IO completion latencies
io           total size of the IO performed
aggrb        aggregate group bandwidth
minb         minimum average bandwidth
maxb         maximum average bandwidth
mint         shortest thread run time in the group
maxt         longest thread run time in the group
ios          total number of IOs performed by all groups
merge        total number of IO merges that occurred
ticks        number of ticks we kept the disk busy
in_queue     total time spent in the disk queue
util         disk utilization
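If you would rather process results programmatically than parse this text report, fio can also emit JSON. A hedged sketch, assuming a fio build new enough to support the JSON output format and the jq tool installed; the device path and job name are simply reused from the run above, and the field paths follow the usual fio JSON layout, so verify them against your fio version:

# Re-run the mixed test with JSON output, then extract read and write IOPS
fio -filename=/dev/sdc4 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=4k -size=1000G -numjobs=50 -runtime=180 -group_reporting -name=randrw_70read_4k_local --output-format=json --output=result.json
jq '.jobs[0].read.iops, .jobs[0].write.iops' result.json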
4. A closer look at IO queue depth
At any given moment there are N in-flight IO requests: those waiting in the queue plus those being serviced by the disk. N is the queue depth.
The point of increasing the disk queue depth is to keep the disk continuously busy and reduce its idle time.
Increase the queue depth -> improve utilization -> reach peak IOPS and MBPS -> while making sure the response time stays within an acceptable range.
There are several ways to increase the queue depth. Using asynchronous IO to issue multiple IO requests at once is equivalent to having multiple IO requests in the queue, and so is having multiple threads issue synchronous IO requests (see the sketch below).
Increasing the application's IO size also works: once a large IO reaches the lower layers, it is split into multiple IO requests, which again means multiple requests in the queue, so the queue depth increases.
As the queue depth grows, however, the time an IO waits in the queue grows too, lengthening the IO response time, so there is a trade-off to be made.
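As a concrete illustration of the asynchronous-IO approach, a sketch of a fio run that keeps multiple IOs in flight per job. It assumes the libaio engine is available (install libaio-devel via yum, as noted in the parameter list above) and reuses the test device from section 3; the depth of 16 and the job count are illustrative values, not recommendations:

# Asynchronous random reads with 16 outstanding IOs per job,
# instead of one synchronous IO at a time as in the psync runs above
fio -filename=/dev/emcpowerb -direct=1 -iodepth=16 -ioengine=libaio -rw=randread -bs=4k -size=1000G -numjobs=4 -runtime=180 -group_reporting -name=rand_read_qd16

Compare the IOPS of such a run against the iodepth=1 runs in section 3 to see the utilization gain, while checking that latency stays within your acceptable range.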
Why do we need to parallelize disk I/O? The main purpose is to improve the performance of the application. This is particularly important for virtual disks (or LUNs) composed of multiple physical disks.
If I/O is submitted one at a time, although the response time is shorter, the system throughput is very small.
In comparison, submitting multiple I/Os at one time not only shortens the head movement distance (through the elevator algorithm), but also improves IOPS.
If an elevator can only carry one person at a time, each rider reaches their floor quickly (short response time), but everyone else spends longer waiting in line (long queue).
Submitting multiple I/Os to the disk system at once balances throughput and overall response time.
Viewing the default queue depth on a Linux system:
[root@qsdb ~]# lsscsi -l
[0:0:0:0]    disk    DGC      VRAID             0533  /dev/sda
  state=running queue_depth=30 scsi_level=5 type=0 device_blocked=0 timeout=30
[0:0:1:0]    disk    DGC      VRAID             0533  /dev/sdb
  state=running queue_depth=30 scsi_level=5 type=0 device_blocked=0 timeout=30
[2:0:0:0]    disk    DGC      VRAID             0533  /dev/sdd
  state=running queue_depth=30 scsi_level=5 type=0 device_blocked=0 timeout=30
[2:0:1:0]    disk    DGC      VRAID             0533  /dev/sde
  state=running queue_depth=30 scsi_level=5 type=0 device_blocked=0 timeout=30
[4:2:0:0]    disk    IBM      ServeRAID M5210   4.27  /dev/sdc
  state=running queue_depth=256 scsi_level=6 type=0 device_blocked=0 timeout=90
[9:0:0:0]    cd/dvd  Lenovo   SATA ODD 81Y3677  IB00  /dev/sr0
  state=running queue_depth=1 scsi_level=6 type=5 device_blocked=0 timeout=30
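The per-device queue depth shown above can also be read, and on many controllers adjusted, through sysfs. A minimal sketch; the device name sdd is taken from the listing above, and whether a write is honored depends on the driver:

# View the current queue depth of sdd
cat /sys/block/sdd/device/queue_depth
# Try raising it to 64 (requires root; not all drivers allow changing it)
echo 64 > /sys/block/sdd/device/queue_depth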
Test with the dd command, using bs=2M:
dd if=/dev/zero of=/dev/sdd bs=2M count=1000 oflag=direct
1000+0 records in
1000+0 records out
2097152000 bytes (2.1 GB) copied, 10.6663 seconds, 197 MB/s
Device:  rrqm/s  wrqm/s    r/s     w/s  rsec/s     wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdd        0.00    0.00   0.00  380.60    0.00  389734.40   1024.00      2.39   6.28   2.56  97.42
As can be seen, each 2MB IO is split into multiple 512KB IOs once it reaches the lower layers. The average queue length is 2.39, the disk utilization is 97%, and the throughput reaches 197 MB/s.
(Why 512KB IOs? Look up the meaning and usage of the kernel parameter max_sectors_kb; a quick way to check it is sketched below.) In other words, increasing the queue depth makes it possible to measure the disk's peak performance.
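Rather than searching for the value, you can read max_sectors_kb directly from sysfs, where the kernel exposes it per block device. A short sketch, with the device name taken from the dd test above:

# Maximum IO size (in KB) the kernel will issue to sdd in a single request;
# a value of 512 would explain why the 2MB dd IOs were split into 512KB IOs
cat /sys/block/sdd/queue/max_sectors_kb
# The hardware ceiling for this value
cat /sys/block/sdd/queue/max_hw_sectors_kb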
5. Detailed explanation of iostat, the Linux command for viewing IO
[root@rac01-node01 /]# iostat -xd 3
Linux 3.8.13-16.2.1.el6uek.x86_64 (rac01-node01)  05/27/2017  _x86_64_  (40 CPU)

Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sda        0.05    0.75   2.50   0.50   76.59   69.83     48.96      0.00   1.17   0.47   0.14
scd0       0.00    0.00   0.02   0.00    0.11    0.00      5.25      0.00  21.37  20.94   0.05
dm-0       0.00    0.00   2.40   1.24   75.88   69.83     40.00      0.01   1.38   0.38   0.14
dm-1       0.00    0.00   0.02   0.00    0.14    0.00      8.00      0.00   0.65   0.39   0.00
sdc        0.00    0.00   0.01   0.00    0.11    0.00     10.20      0.00   0.28   0.28   0.00
sdb        0.00    0.00   0.01   0.00    0.11    0.00     10.20      0.00   0.15   0.15   0.00
sdd        0.00    0.00   0.01   0.00    0.11    0.00     10.20      0.00   0.25   0.25   0.00
sde        0.00    0.00   0.01   0.00    0.11    0.00     10.20      0.00   0.14   0.14   0.00
Output parameter description:
rrqm/s: how many read requests for this device were merged per second (when a system call needs to read data, the VFS sends the request to the file system; if the FS finds that different read requests target the same block, it merges them).
wrqm/s: how many write requests for this device were merged per second.
rsec/s: the number of sectors read from the device per second.
wsec/s: the number of sectors written to the device per second.
rKB/s: the number of kilobytes read from the device per second.
wKB/s: the number of kilobytes written to the device per second.
avgrq-sz: the average size (in sectors) of the requests that were issued to the device.
avgqu-sz: the average queue length of the requests that were issued to the device; unquestionably, the shorter the queue, the better.
await: the average time (in milliseconds) each IO request takes to be processed. This can be understood as the IO response time; generally, the system IO response time should be below 5 ms, and anything above 10 ms is rather large. This time includes both queue time and service time, so in general await is greater than svctm. The smaller the difference between them, the shorter the queue time; the larger the difference, the longer the queue time, indicating a problem in the system.
svctm: the average service time (in milliseconds) per device IO operation. If svctm is very close to await, there is almost no IO waiting and disk performance is good. If await is much higher than svctm, the IO queue wait is too long and applications running on the system will slow down.
%util: the fraction of the measurement interval spent processing IO. For example, if the interval is 1 second and the device spends 0.8 s processing IO and 0.2 s idle, then %util = 0.8/1 = 80%. This parameter therefore indicates how busy the device is. Generally, 100% means the disk is running close to full capacity (although with multiple disks behind a device, %util of 100% does not necessarily mean a bottleneck, because the disks can serve IO concurrently).
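As a small practical complement, a hedged sketch of using iostat output in a one-liner to flag busy devices. The thresholds (await above 10 ms, %util above 90%) follow the guidelines above; the column positions ($10 = await, $12 = %util) and the device-name pattern assume the extended output layout and device names shown in this section, so adjust them for your sysstat version:

# Print devices whose await exceeds 10ms or whose %util exceeds 90%
iostat -xd 3 2 | awk '$1 ~ /^(sd|dm|scd)/ && ($10+0 > 10 || $12+0 > 90) { print $1, "await=" $10, "%util=" $12 }'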
The exploration and experiments in this article show that Linux IO performance optimization is not a problem that can be solved simply by upgrading the hardware; it requires comprehensive analysis and tuning for the specific application scenario and its IO operations. Various methods and tools are available for tuning IO performance on Linux, such as choosing an IO scheduler, using a RAID array, or using the disk cache. We hope this exploration proves enlightening and helps you raise your Linux system's IO performance to a higher level.
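For instance, the IO scheduler mentioned above can be inspected and switched per device at runtime through sysfs. A minimal sketch; the available scheduler names vary by kernel, and deadline is simply one that exists on kernels of the era shown in section 5:

# Show the available schedulers for sda; the active one appears in brackets
cat /sys/block/sda/queue/scheduler
# Switch sda to the deadline scheduler (requires root)
echo deadline > /sys/block/sda/queue/scheduler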