
What does linux high concurrency mean?

青灯夜游 (Original) · 2022-11-11 15:25:14

In Linux, high concurrency refers to a situation in which a system receives a large number of operation requests within a short period of time. It mainly occurs when a web system is hit by a concentrated burst of accesses and requests; during such a period the system must perform a large number of operations, such as resource requests and database operations.


The operating environment of this tutorial: Linux 7.3 system, Dell G3 computer.

1 High Concurrency Concept

1.1 What is high concurrency

High concurrency is one of the factors that must be considered when designing a distributed Internet system architecture. It usually means designing the system so that it can handle many requests in parallel at the same time. High concurrency (High Concurrency) describes a situation in which the system receives a large number of operation requests within a short period of time. It mainly occurs when a web system faces a concentrated burst of accesses and requests (for example: ticket grabbing on 12306, or Tmall's Double Eleven event). During such a period the system must perform a large number of operations, such as resource requests and database operations.

1.2 Metrics related to high concurrency

Response Time
  • The time the system takes to respond to a request. For example, if the system takes 200 ms to process an HTTP request, that 200 ms is the system's response time.
Throughput
  • The number of requests processed per unit of time.
Queries per second, QPS (Queries Per Second)
  • The number of requests responded to per second. In the Internet field, the distinction between this metric and throughput is not very sharp.
Number of concurrent users
  • The number of users simultaneously making normal use of the system's functions. For example, in an instant-messaging system, the number of simultaneously online users reflects, to a certain extent, the number of concurrent users of the system.
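As a rough illustration of the first two metrics, the sketch below computes throughput (QPS) and average response time from a small sample log. The log format is hypothetical — one line per request, holding an epoch second and a response time in milliseconds:

```shell
# Sample log (hypothetical format): "<epoch_second> <response_time_ms>" per request
printf '100 120\n100 180\n101 200\n101 240\n' > /tmp/access.sample

# QPS = requests / seconds covered; avg response time = mean of column 2
awk '{
  n++; sum += $2
  if (min == "" || $1 < min) min = $1
  if ($1 > max) max = $1
}
END {
  span = max - min + 1
  printf "requests=%d qps=%.1f avg_rt_ms=%.1f\n", n, n/span, sum/n
}' /tmp/access.sample
# -> requests=4 qps=2.0 avg_rt_ms=185.0
```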

1.3 High concurrency optimization

  • Limit on the maximum number of open files in a single process
  • Kernel TCP parameters
  • I/O event allocation mechanism

2 Improving the concurrency capability of the system

2.1 Vertical expansion

Improve single-machine processing capability:

  • Enhance single-machine hardware performance, for example: more CPU cores (e.g. 32 cores), a better network card (e.g. 10 GbE), a better hard drive (e.g. SSD), larger disk capacity (e.g. 2 TB), more system memory (e.g. 128 GB)
  • Improve single-machine architecture performance, for example: use a cache to reduce the number of I/O operations, use asynchronous processing to increase single-service throughput, use lock-free data structures to reduce response time

2.2 Horizontal expansion

Add more servers; system performance can then be expanded linearly.

2.3 Common Internet layered architecture

(1) Client layer: the typical caller is a browser or a mobile APP

(2) Reverse proxy layer: system entrance, reverse proxy

(3) Site application layer: implement core application logic, return html or json

(4) Service layer: this layer exists if the system has been split into services

(5) Data-cache layer: Cache accelerates access to storage

(6) Data-database layer: the database provides persistent data storage

2.4 Horizontal expansion architecture

Horizontal expansion of the reverse proxy layer
  • When nginx becomes a bottleneck, simply increase the number of servers, deploy new nginx services, and add external network IPs to expand the performance of the reverse proxy layer, achieving theoretically unlimited high concurrency
  • This is achieved through "DNS polling" (round-robin DNS): the dns-server is configured with multiple resolution IPs for one domain name, and as DNS resolution requests reach the dns-server, these IPs are returned in rotation
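DNS polling can be pictured as a zone configured with several A records for the same name. Below is a hypothetical BIND-style zone fragment; the name and addresses are purely illustrative:

```
; round-robin DNS: three A records for the same name (illustrative addresses)
www   IN  A   203.0.113.10
www   IN  A   203.0.113.11
www   IN  A   203.0.113.12
```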
Horizontal expansion of the site layer
  • Achieved through "nginx": by modifying nginx.conf, multiple web backends can be configured
  • When the web backend becomes a bottleneck, simply increase the number of servers, deploy new web services, and configure the new web backends in the nginx configuration to expand the performance of the site layer, achieving theoretically unlimited high concurrency
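The multiple web backends configured in nginx.conf can be sketched as an upstream block. This is a minimal, hypothetical fragment; the upstream name and server addresses are illustrative:

```nginx
# nginx.conf (fragment): load-balance across several web backends
upstream web_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    # to scale the site layer, add another backend line here and reload nginx
    server 10.0.0.13:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://web_backend;
    }
}
```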
Horizontal expansion of the service layer
  • Achieved through a "service connection pool"
  • When the site layer calls the downstream service layer (RPC-server) through an RPC-client, the connection pool in the RPC-client establishes multiple connections with the downstream service. When the service becomes a bottleneck, simply increase the number of servers, deploy new services, and have the RPC-client establish connections to the new downstream services to expand service-layer performance, achieving theoretically unlimited high concurrency
Horizontal expansion of the data layer
  • The data layer (cache, database) is expanded by splitting data horizontally: data originally stored on one server is split across different servers, thereby expanding system performance.
    • Split by data range: each library stores a certain range of data
      • user0 library: stores uid range 1–1kw (1kw = 10 million)
      • user1 library: stores uid range 1kw–2kw
    • Split horizontally by hash
      • user0 library: stores data for even uids
      • user1 library: stores data for odd uids
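Routing a uid to a shard under the parity scheme above is just a modulo operation. A minimal shell sketch, reusing the library names from the example:

```shell
# Pick the library for a given uid: even uids -> user0, odd uids -> user1
shard_for_uid() {
  uid=$1
  echo "user$((uid % 2))"
}

shard_for_uid 42    # -> user0
shard_for_uid 1001  # -> user1
```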

3 Improving concurrency on a single Linux server

3.1 iptables related

  • Close the iptables firewall and prevent the kernel from loading the iptables module
  • Limit on the maximum number of open files in a single process (the default maximum is 1024):
      ulimit -n 65535
  • Modify the soft and hard limits on the number of files a Linux user may open:
      vim /etc/security/limits.conf
      * soft nofile 65535   # '*' means the change applies to all users
      * hard nofile 65535
      # Ensure /etc/security/limits.conf is read when a user logs in
      vim /etc/pam.d/login
      session required /lib/security/pam_limits.so
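After editing limits.conf and logging in again, the effective limits can be checked from the new shell. A quick verification sketch; the numbers printed depend on the system:

```shell
# Per-process soft and hard limits on open files for the current shell
ulimit -Sn
ulimit -Hn

# System-wide ceiling on open file handles
cat /proc/sys/fs/file-max
```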

3.2 Kernel TCP parameters

TIME_WAIT status

After a TCP connection is closed, it remains in the TIME_WAIT state for a certain period of time before the port is released. When there are too many concurrent requests, a large number of connections in the TIME_WAIT state accumulate; if they are not released in time, they occupy a large amount of port and server resources.

    • # Count connections in the TIME_WAIT state
      netstat -n | grep tcp | grep TIME_WAIT | wc -l
    • # Tune kernel parameters
      vim /etc/sysctl.conf
      net.ipv4.tcp_syncookies = 1   # Enable SYN cookies: when the SYN wait queue overflows, use cookies to handle connections, which defends against small-scale SYN flood attacks. Default 0 (off)
      net.ipv4.tcp_tw_reuse = 1     # Allow TIME_WAIT sockets to be reused for new TCP connections. Default 0 (off)
      net.ipv4.tcp_tw_recycle = 1   # Enable fast recycling of TIME_WAIT sockets. Default 0 (off). Note: unsafe behind NAT, and removed in Linux 4.12+
      net.ipv4.tcp_fin_timeout = 30 # Shorten the system default FIN-WAIT-2 timeout
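Besides netstat, the TIME_WAIT count can be read directly from /proc/net/tcp, where the fourth field ("st") holds the socket state and 06 means TIME_WAIT. A small sketch; on modern systems `ss -tan state time-wait` gives the same information:

```shell
# After editing /etc/sysctl.conf, apply the changes with: sysctl -p

# Count TIME_WAIT sockets by parsing /proc/net/tcp (state field "st": 06 = TIME_WAIT)
cat /proc/net/tcp /proc/net/tcp6 2>/dev/null | awk '$4 == "06"' | wc -l
```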

