
nginx - Are the number of requests per second and concurrency the same concept?

I tested the server from Windows with Apache's ab:

ab -c 100 -n 1000 http://www.xxxxx.com

It turns out that only about 20 requests are processed per second, and the rest of the time is spent waiting. Isn't nginx's default concurrency supposed to be very high?

Or are requests processed per second and concurrency simply not the same concept?

Topics: requests per second, concurrency, stress testing. Please point me in the right direction...

某草草 · 2713 days ago · 1355 views

4 replies

  • 伊谢尔伦 · 2017-05-16 17:24:51

    Consider just a single server.
    Concurrency is the number of requests arriving at the same time, seen from the client's side; the server's concurrent processing capacity is how many requests it can handle simultaneously. Ideally (with no process switching), that capacity equals the number of CPU cores.
    Building on that: with 8 cores and purely computational tasks involving no IO, each core can obviously handle more than one request per second. Suppose each core handles 10,000 simple compute requests per second (an entirely plausible figure); the server can then handle 80,000 requests per second.
    Now add IO. If the CPU has to sit idle while waiting on IO, the number of requests it can handle per second drops sharply, perhaps from 10,000 to hundreds, dozens, or even fewer.
    Next, layer in factors such as process switching and algorithm efficiency. Adding these in one by one gives you a complex but realistic server. (The sketch below works through the arithmetic.)
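    A minimal sketch of the numbers above, assuming illustrative values (0.1 ms of CPU per request, 20 ms of IO wait; both are assumptions, not measurements):

    cores = 8
    cpu_ms_per_request = 0.1   # assumed: 10,000 compute-only requests/s per core
    io_wait_ms = 20.0          # assumed: CPU idles this long per request

    # Pure compute, no IO, no process switching: bounded by the CPU alone.
    pure_compute_qps = cores * (1000 / cpu_ms_per_request)                # 80,000 req/s

    # Blocking IO: each request occupies a core for CPU time plus the wait.
    blocking_io_qps = cores * (1000 / (cpu_ms_per_request + io_wait_ms))  # ~398 req/s

    print(pure_compute_qps, blocking_io_qps)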

  • PHPz · 2017-05-16 17:24:51

    They are not the same concept, but they are related.
    Let the average response time be t (in milliseconds), the concurrency be c, and the number of requests processed per second be q. Then:
    q = (1000 / t) * c
    That is the relationship.
    To increase q there are only two levers: 1) lower t, or 2) raise c.
    For '1', the only route is optimizing the code; you do your best, but the gains are often limited.
    For '2', c is usually tied to your server's request-handling model. In a "one thread per request" model, the maximum c is bounded by how many threads you can support; in a "one process per request" model, it is bounded by the maximum number of processes.

    While raising c, keep one thing in mind: the more threads/processes there are, the more context switching and scheduling overhead you pay, which noticeably (if indirectly) drives t up and keeps q from growing in proportion to c. Blindly increasing c therefore usually gives poor results; the best value of c should be determined experimentally.

    There is also a special case. If the business dictates that the service has "small data volume, long response time" characteristics, i.e. a workload that is not busy but very slow, it can be served in NIO (non-blocking IO) mode; nginx, for example, uses this event-driven model by default.
    In this mode c is no longer tied to the number of threads/processes, only to the number of socket connections, and that number can be very large: a specially tuned Linux server can hold millions of sockets open at once, so c can reach a million.
    At such a high c, even a large t still supports a very high q, and the number of real threads/processes can be kept equal to the number of CPU cores to maximize CPU utilization. (A worked example of the formula follows below.)
    Of course, all of this presupposes the "small data volume, long response time" workload.
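    A short worked example of q = (1000 / t) * c, using assumed values for t and c:

    def qps(t_ms, c):
        """Requests per second for average response time t_ms (milliseconds) and concurrency c."""
        return (1000 / t_ms) * c

    print(qps(50, 100))         # t=50 ms, c=100        -> 2,000 req/s
    print(qps(25, 100))         # halving t doubles q   -> 4,000 req/s
    print(qps(50, 1000))        # raising c tenfold     -> 20,000 req/s
    # NIO-style server: c is bounded by socket connections, not threads,
    # so even slow responses can still yield a very high q.
    print(qps(500, 1000000))    # t=500 ms, c=1,000,000 -> 2,000,000 req/s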

  • 怪我咯 · 2017-05-16 17:24:51

    Assuming your site is static and all requests go through nginx: first confirm that the network between the test machine and the server is healthy. Tools like ab are not very convincing for stress testing; jmeter or LoadRunner are recommended instead. During the test, only treat the numbers as the server's real performance once the response-time curve has been stable for some time, because a test needs a warm-up period at the start. The curve generally settles after a while, and only then should you read off the response time.

  • 为情所困 · 2017-05-16 17:24:51

    Requests per second = total number of requests completed within a period / the length of that period.

    Concurrency is the number of connections currently being held open. You can view all connections with netstat -net.

    For example, to count TCP connections involving port 80 (matching ":80" rather than a bare "80" avoids false hits on PIDs and unrelated numbers):

    netstat -ntp | grep ":80" | wc -l
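    Tying the two metrics back to the ab run in the question (the 50-second duration is inferred from 1000 requests at 20 req/s, not taken from actual output):

    total_requests = 1000     # ab -n 1000
    elapsed_seconds = 50.0    # inferred: 1000 requests / 20 req/s
    rps = total_requests / elapsed_seconds   # 20.0 requests per second

    concurrency = 100         # ab -c 100: connections held open at once
    print(rps, concurrency)   # two different measurements of the same test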
