
Detailed explanation of TPS, QPS, number of concurrencies, and response time

Guanhui (forwarded)
2020-07-18 17:54:53


QPS

Principle: 80% of each day's visits are concentrated in 20% of the day; that 20% of the time is called peak time.

Formula: (Total PV * 80%) / (seconds per day * 20%) = requests per second (QPS) at peak time.

Machines: peak-time QPS / QPS of a single machine = number of machines required.

Example: with 3,000,000 (300w) PV per day on a single machine, what QPS does this machine need to support?

(3,000,000 * 0.8) / (86,400 * 0.2) ≈ 139 (QPS).

So the machine generally needs to handle 139 QPS, since that is the peak value.
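The calculation above can be sketched as a small helper. The numbers (3,000,000 PV, the 80/20 split, a hypothetical 50 QPS per machine) come from or extend the example; the function names are illustrative, not from the article.

```python
import math

def peak_qps(daily_pv: int, traffic_share: float = 0.8, time_share: float = 0.2) -> float:
    """Requests per second at peak, assuming `traffic_share` of daily PV
    arrives within `time_share` of the day (the 80/20 rule above)."""
    seconds_per_day = 86_400
    return (daily_pv * traffic_share) / (seconds_per_day * time_share)

def machines_needed(peak: float, single_machine_qps: float) -> int:
    """Round up: peak QPS divided by what one machine can handle."""
    return math.ceil(peak / single_machine_qps)

qps = peak_qps(3_000_000)
print(round(qps))                 # 139 QPS at peak
print(machines_needed(qps, 50))   # 3 machines, if one machine handles 50 QPS
```

The 50 QPS per machine is an assumed figure for illustration; in practice it would come from load-testing a single machine.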

QPS

Query rate per second (QPS) measures how much traffic a particular query server handles within a specified period of time.

On the Internet, QPS is often used to measure the performance of machines serving as Domain Name System (DNS) servers.

It corresponds to fetches/sec, the number of requests answered per second, i.e. the maximum throughput capacity.

Computer language

QPS is also the name of a computer programming language used for data analysis and report output. It runs on the MRDCL platform and supports data files in ASC and CSI formats.

The CSI format is QPS's own data format. QPS is a highly specialized language for data analysis, data cleaning, and report output, and is currently most widely used in the market research industry; it has relatively few users in China.

Development work requires an understanding of the concepts of throughput (TPS), QPS, number of concurrent users, and response time (RT). The following notes are based on the Baidu Encyclopedia entries:

1. Response time (RT)

Response time refers to the time it takes for the system to respond to a request. Intuitively, this indicator is very consistent with people's subjective feelings about software performance, because it completely records the time it takes for the entire computer system to process requests. Since a system usually provides many functions, and the processing logic of different functions is also very different, the response time of different functions is also different, and even the response time of the same function is different under different input data. Therefore, when discussing the response time of a system, people usually refer to the average time of all functions of the system or the maximum response time of all functions. Of course, it is often necessary to discuss the average response time and maximum response time for each function or group of functions.

For single-machine application systems without concurrent operations, response time is generally considered a reasonable and accurate performance indicator. It should be noted that the absolute value of the response time does not directly reflect the software's performance; what actually matters is the user's acceptance of that response time. For a game, a response time under 100 milliseconds is good, about 1 second is barely acceptable, and 3 seconds is completely unacceptable. For a compilation system, fully compiling the source code of a larger piece of software may take tens of minutes or even longer, yet such response times are acceptable to users.
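A minimal sketch of how per-request response time could be measured and summarized into the average and maximum values discussed above. The `handler` here is a stand-in for any request-processing function; the names are illustrative.

```python
import time
import statistics

def measure_rt(handler, requests):
    """Time each request and return (average, maximum) response time
    in seconds -- the two summary statistics discussed above."""
    times = []
    for req in requests:
        start = time.perf_counter()
        handler(req)                                # process one request
        times.append(time.perf_counter() - start)   # elapsed wall-clock time
    return statistics.mean(times), max(times)

# A toy handler standing in for real request processing.
avg_rt, max_rt = measure_rt(lambda r: sum(range(r)), [10_000] * 5)
```

`time.perf_counter` is used because it is a high-resolution clock suited to measuring short elapsed intervals.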

2. Throughput (Throughput)

Throughput refers to the number of requests processed by the system per unit time. For application systems without concurrency, throughput is strictly inversely proportional to response time. In fact, throughput is the reciprocal of response time. As mentioned before, for single-user systems, response time (or system response time and application delay time) can be a good measure of system performance, but for concurrent systems, throughput is usually used as a performance indicator.

For a multi-user system, if only one user is using the system, the average response time is t. When n users are using it, the response time seen by each user is usually not n×t; it is often much smaller than n×t (although in some special cases it may be larger, even much larger). This is because processing each request requires many resources, and many of the steps in handling a request are difficult to execute concurrently, so at any specific point in time only a few resources are actually occupied. In other words, when processing a single request, many resources may be idle at any given moment. When processing multiple requests, if resources are configured reasonably, the average response time seen by each user does not increase linearly with the number of users. In practice, the average response time of different systems grows at different rates as the number of users increases, which is the main reason throughput is used to measure the performance of concurrent systems. Generally speaking, throughput is a fairly general indicator: if two systems with different numbers of users and different usage patterns have basically the same maximum throughput, their processing capabilities can be judged to be basically the same.
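The two relationships described above can be written out directly: for a serial (single-user) system, throughput is the reciprocal of response time, and for a concurrent system in steady state, throughput, concurrency, and average response time are linked by Little's Law (a standard result, not stated explicitly in the article). The numbers below are illustrative.

```python
def serial_throughput(response_time_s: float) -> float:
    """Single-user system: throughput is the reciprocal of response time."""
    return 1.0 / response_time_s

def concurrent_throughput(concurrency: int, avg_response_time_s: float) -> float:
    """Little's Law rearranged: throughput = concurrency / avg response time,
    valid for a system in steady state."""
    return concurrency / avg_response_time_s

print(serial_throughput(0.05))          # 50 ms response time -> 20 requests/s
print(concurrent_throughput(100, 0.2))  # 100 in-flight requests at 200 ms -> 500 requests/s
```

This also shows why concurrent response time need not be n×t: as long as requests overlap, 100 concurrent users at 200 ms each yield far more than one user's 1/0.2 = 5 requests/s.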

3. Number of concurrent users

The number of concurrent users refers to the number of users the system can serve at the same time while all of them use its functions normally. Compared with throughput, the number of concurrent users is a more intuitive but also vaguer performance indicator. In fact, it is a rather imprecise indicator, because different usage patterns cause different users to issue different numbers of requests per unit time. Take a website as an example, and assume users must register before they can use it. Registered users are not using the site all the time, so only some of them are online at any given moment; online users in turn spend much of their time reading the information on pages, so only some online users are issuing requests to the system at any given instant. This gives three user counts for the website: the number of registered users, the number of online users, and the number of users making requests simultaneously. Since registered users may not log in for long periods, using the registered-user count as a performance indicator introduces large error. Both the online-user count and the simultaneous-requester count can serve as performance indicators; the online-user count is more intuitive, while the simultaneous-requester count is more precise.
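A rough back-of-the-envelope sketch, not from the article: of the users online, only those with a request currently in flight count as simultaneous requesters, roughly the fraction RT / (RT + think time). The function name and all numbers are assumptions for illustration.

```python
def simultaneous_requesters(online_users: int,
                            avg_think_time_s: float,
                            avg_response_time_s: float) -> float:
    """Estimate how many online users have a request in flight at any
    instant: each user alternates between waiting avg_response_time_s
    for a response and spending avg_think_time_s reading the page."""
    in_flight_fraction = avg_response_time_s / (avg_response_time_s + avg_think_time_s)
    return online_users * in_flight_fraction

# 10,000 users online, 30 s average think time, 0.5 s average response time:
print(round(simultaneous_requesters(10_000, 30, 0.5)))  # 164
```

This illustrates the gap the article describes: tens of thousands online can translate into only a few hundred simultaneous requests.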

4. QPS query rate per second (Query Per Second)

The query rate per second (QPS) is the amount of traffic a particular query server handles within a specified time. On the Internet, the performance of a machine acting as a Domain Name System server is often measured by its queries per second. It corresponds to fetches/sec, the number of requests answered per second, i.e. the maximum throughput capability. (It is similar to TPS, but applied to the throughput of a specific scenario.)
