
The performance parameters of Linux servers

步履不停 · Original · 2019-07-02 16:28:01


While a server based on the Linux operating system is running, it exposes all kinds of parameter information. Operations staff and system administrators are understandably very sensitive to this data, but these parameters also matter to developers: when your program is not working properly, these clues can often help you locate and track down the problem quickly.

This article only covers some simple tools for viewing the relevant system parameters. Many of them work by analyzing and processing data under /proc and /sys; for more detailed, more professional performance monitoring and tuning you may also need more specialized tools (perf, systemtap, etc.) and techniques. After all, system performance monitoring is a deep subject in its own right.


1. CPU and Memory

1.1 top

➜ ~ top

[Screenshot: top output]

The three values at the end of the first line are the system's average load over the previous 1, 5, and 15 minutes, from which you can also tell whether the load is trending upward, holding steady, or trending downward. When this value exceeds the number of CPU execution units, the CPU is saturated and has become a bottleneck.
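A quick way to put the load numbers in context is to compare them against the core count (just a minimal sketch; any shell will do):

➜ ~ nproc      # number of CPU execution units
➜ ~ uptime     # the last three values are the 1/5/15-minute load averages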

The second line gives task-state statistics for the system. running naturally includes tasks currently on a CPU as well as those queued up to be scheduled; sleeping is usually a task waiting for some event (such as an IO operation) to complete, and can be subdivided into interruptible and uninterruptible sleep; stopped is a task that has been suspended, usually by sending SIGSTOP or by pressing Ctrl-Z on a foreground task; for a zombie task, the process's resources are reclaimed automatically when it terminates, but the task descriptor holding its exit status cannot be released until the parent process reaps it, so such a process shows up in defunct state — whether because the parent exited early or simply never called wait(). When such processes appear, pay special attention to whether the program is designed incorrectly.
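To spot zombies outside of top, a one-liner along these lines works (the stat column shows Z for defunct processes):

➜ ~ ps -eo stat,pid,ppid,comm | awk '$1 ~ /^Z/'    # list defunct tasks and their parents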

The third line breaks CPU usage down by type:

● (us) user: time the CPU spends in user mode at a low nice value, i.e. high priority (nice <= 0)

● (sy) system: time the CPU spends in kernel mode; the operating system drops from user mode into kernel mode via system calls to perform specific services. This value is usually small, but it grows when the server performs intensive IO

● (ni) nice: time the CPU spends in user mode at low priority, i.e. a high nice value (nice > 0). By default a newly started process has nice = 0 and is not counted here, unless you change the program's nice value manually with renice or setpriority()

● (id) idle: time the CPU spends idle (executing the kernel's idle handler)

● (wa) iowait: Time occupied waiting for IO to complete

● (hi) irq: Time consumed by the system processing hardware interrupts

● (si) softirq: time the system spends handling softirqs. Recall that bottom halves are divided into softirqs, tasklets (actually a special case of softirqs), and work queues; it is not clear which of these are counted here — after all, work queues no longer execute in interrupt context

● (st) steal: only meaningful on a virtual machine, where the virtual CPUs share the physical CPU. This is the time the virtual machine spends waiting for the hypervisor to schedule it onto a CPU; in other words, during this time the hypervisor has scheduled the CPU to run something else, so these CPU resources are "stolen". On my KVM VPS this value is not 0, but only on the order of 0.1% — can it be used to judge whether the VPS is oversold?
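When you just want to sample these fields non-interactively (for a script or a quick diff), top's batch mode is enough — a minimal sketch:

➜ ~ top -b -n 1 | head -5     # one snapshot of the load, task and %Cpu summary lines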

A high value in any of these categories usually means something, and it also suggests the corresponding troubleshooting approach when server CPU usage is too high:

1. When user occupancy is too high, it is usually a few individual processes hogging the CPU, and they are easy to find with top; if you then suspect the program is abnormal, you can use perf or similar tools to find the hot call paths for further investigation;

2. When system occupancy is too high, heavy IO (including terminal IO) can account for it, for example on file servers, database servers and the like; otherwise (say, above 20%) it is very likely that some part of the kernel or a driver module has a problem;

3. When nice occupancy is too high, it is usually deliberate: whoever started the process knew it would be CPU-hungry and set its nice value to make sure it would not drown out the CPU requests of other processes;

4. When iowait occupancy is too high, it usually means some program performs IO very inefficiently, or the corresponding IO device is so slow that reads and writes take a long time to complete;

5. When irq/softirq occupancy is too high, some peripheral is probably misbehaving and generating a large number of interrupt requests; check the /proc/interrupts file to track down the problem (see the sketch after this list);

6. When steal occupancy is too high, the unscrupulous provider has oversold the virtual machine!
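Following up on point 5, a simple way to watch the interrupt counters is to diff /proc/interrupts over time (the eth0 pattern is only a hypothetical device name — substitute whatever you suspect):

➜ ~ watch -d -n 1 'cat /proc/interrupts'           # -d highlights counters that changed since the last refresh
➜ ~ watch -d -n 1 'grep -i eth0 /proc/interrupts'  # focus on a single device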

The fourth and fifth lines give information about physical memory and the swap partition: total = free + used + buff/cache. Nowadays buffers and cached Mem are reported as a single sum, but the relationship between the two is not made clear in many places. In fact, comparing the numbers shows that these two values are the Buffers and Cached fields in /proc/meminfo: Buffers is a block-level cache for raw disks, mainly caching file-system metadata (such as superblock information) in the form of raw blocks, and is generally fairly small (around 20M); Cached caches the contents of specific files that have been read, to improve file access efficiency — it can be regarded as the file cache of the file system.

avail Mem is a newer field that indicates how much memory could be given to a newly started program without swapping; it is roughly equal to free + buff/cached, which also confirms the statement above that free + buffers + cached Mem is the physical memory that is really available. Moreover, using the swap partition is not necessarily a bad thing, so swap usage by itself is not a serious metric, but frequent swap in/out is not good; that situation deserves attention and usually indicates a shortage of physical memory.
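To cross-check these figures, the same fields can be read straight from /proc/meminfo (a minimal sketch; free simply reformats them):

➜ ~ grep -E 'MemTotal|MemFree|MemAvailable|^Buffers|^Cached|SwapTotal|SwapFree' /proc/meminfo
➜ ~ free -h    # buff/cache and available roughly correspond to Buffers+Cached and MemAvailable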

Finally comes the per-process resource usage list, where a process's CPU usage is the sum over all CPU cores. Since top itself performs a large number of /proc reads while running, the top process itself usually ranks near the top of that list.

Although top is very powerful, it is best suited to real-time monitoring of system information on a console; it is not suitable for monitoring system load over long periods (days or months), it misses short-lived processes, and it cannot provide summary statistics.

1.2 vmstat

vmstat is another commonly used system inspection tool besides top. The screenshot below shows the system load while I compiled boost with -j4.

[Screenshot: vmstat output while compiling boost with -j4]

r is the number of runnable processes, roughly consistent with the figure from top; b is the number of processes in uninterruptible sleep; swpd is the amount of virtual memory used, which means the same as top's Swap used value, and, as the manual says, the buffers figure is normally much smaller than cached Mem — buffers are generally on the order of 20M; in the io section, bi and bo are the numbers of blocks received from and sent to the block device per second (blocks/s); in the system section, in is the number of interrupts per second (including clock interrupts) and cs is the number of context switches caused by process switching.

Speaking of this, I recall that many people used to agonize over whether the -j parameter should be the number of CPU cores or cores + 1 when compiling the Linux kernel. By varying the -j value while compiling boost and the Linux kernel with vmstat monitoring turned on, I found that the context-switch count barely changed in either case; only after raising -j far beyond that did context switches increase significantly. So it seems unnecessary to obsess over this parameter, although I have not measured the actual compile times. Some sources say that, outside of system startup or benchmarking, a program that pushes context switches above 100000/s most likely has a problem.
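A minimal way to repeat this little experiment (the build target is whatever you happen to be compiling):

➜ ~ vmstat 1      # leave running in one terminal and watch the cs column
➜ ~ make -j4      # run the build in another terminal, then try different -j values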

1.3 pidstat

If you want comprehensive, detailed tracking of a particular process, nothing is more suitable than pidstat — stack space, page faults, voluntary and involuntary context switches and other information are all laid out in front of you. Its most useful parameter is -t, which lists detailed information for every thread in the process.

-r: displays page faults and memory usage. A page fault occurs when the program accesses a page that is mapped in its virtual address space but has not yet been loaded into physical memory. There are two main types:

minflt/s refers to minor faults: the physical page is already present in physical memory for some reason (shared pages, the caching mechanism, etc.) and is merely missing from the current process's page tables, so the MMU only needs the corresponding entry to be set up; the cost is quite small.

majflt/s refers to major faults: the kernel needs to allocate a free physical page from the currently available physical memory (if no free page is available, other physical pages must first be swapped out to swap space to free one), then load data from external storage into that page and set up the corresponding entry. The cost is quite high — several orders of magnitude more than a minor fault.

-s: stack usage, including StkSize, the stack space reserved for the thread, and StkRef, the stack space actually used. ulimit -s shows that the default stack size on CentOS 6.x is 10240K, while on CentOS 7.x and the Ubuntu series the default is 8192K.

[Screenshot: pidstat output]

-u: CPU usage; the fields are similar to those described earlier

-w: thread context-switch counts, further subdivided into cswch/s, voluntary switches caused by waiting for resources and similar factors, and nvcswch/s, involuntary switches caused by the thread using up its CPU time slice

Getting the program's pid with ps first and then running pidstat every time is tedious, so here is the killer option: -C takes a string, and any process whose Command contains that string has its information printed and counted; -l displays the complete program name and arguments. ➜ ~ pidstat -w -t -C "ailaw" -l

So when looking at a single task — especially a multi-threaded one — pidstat beats the commonly used ps!
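A sketch of a combined invocation, assuming the target process's command line contains the (hypothetical) string "ailaw":

➜ ~ pidstat -r -u -w -t -C "ailaw" -l 1    # page faults, CPU and context switches for every thread, refreshed each second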

1.4 Others

When you need to monitor individual CPUs separately, besides htop you can also use mpstat to check whether the load on each core of an SMP processor is balanced, or whether some hotspot thread is monopolizing a core. ➜ ~ mpstat -P ALL 1

If you want to watch the resources used by a particular process directly, you can use top -u taozj to filter out processes belonging to other users, or use the approach below; the ps command can customize which columns are printed:

while :; do ps -eo user,pid,ni,pri,pcpu,psr,comm | grep 'ailawd'; sleep 1; done

If you want to see the parent-child relationships clearly, the commonly used parameters below display the process tree; the output is far more detailed and prettier than pstree

➜ ~ ps axjf

2. Disk IO

iotop gives an intuitive real-time view of each process's and thread's disk read rates; lsof can show not only which ordinary files are open and by whom, but also which processes have device files such as /dev/sda1 open — for example, when a partition cannot be umounted you can use lsof to find out how the disk partition is being used, and adding the +fg parameter additionally displays each file's open flags.
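For instance, a minimal check when a mount refuses to umount (the /mnt/data path is a hypothetical mount point):

➜ ~ lsof /mnt/data        # processes still holding files open on that mount
➜ ~ lsof +fg /dev/sda1    # also show the open flags (read/write, append, ...) of the device file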

2.1 iostat

➜ ~ iostat -xz 1

In fact, whether you use iostat -xz 1 or sar -d 1, the important parameters for a disk are:

avgqu-sz: the average length of the queue of I/O requests sent to the device. For a single disk a value > 1 indicates that the device is saturated, except for logical disks backed by an array of multiple disks;

await (r_await, w_await): the average time (ms) each device I/O request takes to complete, i.e. the sum of the time the request spends queued and the time it spends being serviced;

svctm: the average service time (ms) of I/O requests sent to the device. If svctm is very close to await, there is almost no I/O queueing and disk performance is very good; otherwise the queue wait is long and the disk responds poorly;

%util: device utilization, i.e. the proportion of each second spent doing I/O work. Single-disk performance starts to degrade when %util > 60% (reflected in a rising await), and the device is close to saturation as it approaches 100%, again except for logical disks backed by an array of multiple disks;

Also, even if the monitored disk performance is poor, it does not necessarily affect application response: the kernel usually uses asynchronous I/O and read/write caching techniques to improve performance, though these are constrained by the physical memory limits discussed above.

The above parameters are also applicable to network file systems.

3. Network

The importance of network performance to a server goes without saying. The tool iptraf shows the network card's send and receive rates intuitively, which makes comparisons simple and convenient; similar throughput information can also be obtained with sar -n DEV 1. Since network cards come with a nominal maximum rate (100M cards, gigabit cards, and so on), it is easy to check device utilization.

Usually the network card's transmission rate is not what network development cares about most; rather it is the packet-loss rate, retransmission rate, network latency and other information about specific UDP and TCP connections.

3.1 netstat

➜ ~ netstat -s

This displays the aggregate statistics for each protocol since the system booted. Although the information is fairly rich and useful, the values are cumulative, so you have to take the difference between two runs to learn the current state of the network, or use watch to make the trend in the numbers visible.
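For example, a quick way to watch the cumulative counters move (the grep pattern is only an illustration; adjust it to the counters you care about):

➜ ~ watch -d -n 1 'netstat -s | grep -iE "retrans|error"'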

Therefore netstat is more commonly used to inspect port and connection information:

netstat --all(-a) --numeric(-n) --tcp(-t) --udp(-u) --timers(-o) --listening(-l) --program(-p)

--numeric disables reverse DNS lookups and speeds up the display; the more commonly used combinations are

➜ ~ netstat -antp    # list all TCP connections

➜ ~ netstat -nltp    # list all local listening TCP sockets; note that -a is not added here

3.2 sar

sar is such a powerful tool that it can cover everything from CPU to disk to page swapping; the -n used here is mainly for analyzing network activity. Although it also breaks the data down by protocol and layer — NFS, IP, ICMP, SOCK and so on — we only care about TCP and UDP. Besides the usual send/receive counts for segments and datagrams, the commands below also show:

TCP ➜ ~ sudo sar -n TCP,ETCP 1

[Screenshot: sar -n TCP,ETCP 1 output]

active/s: TCP connections initiated locally, e.g. via connect(); the TCP state goes from CLOSED -> SYN-SENT

passive/s: TCP connections accepted from a remote peer, e.g. via accept(); the TCP state goes from LISTEN -> SYN-RCVD

retrans/s (tcpRetransSegs): the number of TCP retransmissions per second; these usually occur when packets are lost because network quality is poor or the server is overloaded, triggering TCP's acknowledgement-and-retransmission mechanism

isegerr/s (tcpInErrs): erroneous packets received per second (e.g. checksum failures)

UDP ➜ ~ sudo sar -n UDP 1

noport/s (udpNoPorts): the number of datagrams received per second for which no application was listening on the destination port

idgmerr/s (udpInErrors): the number of datagrams received by this host that could not be delivered for reasons other than the above

Of course, these figures say something about network reliability, but they only become meaningful when combined with the demands of a specific business scenario.

3.3 tcpdump

tcpdump has to be called a great tool. We all like to use wireshark when debugging locally, but what do you do when the problem is on a production server?

The references in the appendix suggest the approach: reproduce the environment and capture packets with tcpdump; when the problem recurs (for example, a certain log line or state appears), stop the capture. tcpdump's -C/-W parameters limit the size of the capture files and rotate the saved packet data automatically when the limit is reached, so the total amount captured stays under control. Afterwards you can pull the packet files off the server and inspect them with wireshark however you like — isn't that nice? Although tcpdump has no GUI, its capture capabilities are by no means weak: you can specify all kinds of filters such as network card, host, port and protocol, and the captured packets are complete and timestamped, so packet analysis for an online program can be just that simple.
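A capture sketch along these lines (the eth0 interface, port 80 and file path are placeholders — adjust them to the service being debugged):

➜ ~ sudo tcpdump -i eth0 -C 100 -W 10 -w /tmp/cap.pcap 'tcp and dst port 80'    # rotate through ten files of about 100MB each
# afterwards, copy /tmp/cap.pcap* to your workstation and open the files in wireshark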

Here is a small test. You can see that Chrome automatically initiates three connections to the web server when it starts; since the dst port parameter is used to restrict the capture, the server's reply packets are filtered out. Take the capture down and open it with wireshark, and the SYN/ACK connection-establishment sequence is quite obvious! When using tcpdump, configure the capture filter as tightly as possible: on the one hand it makes later analysis easier, and on the other hand a running tcpdump affects the performance of the network card and the system, which in turn can affect the online service.

[Screenshot: captured packets opened in wireshark]


