
In-depth discussion of solutions and solutions for "high concurrency and large traffic" access

青灯夜游 · 2022-05-11 14:18:18

How do you handle high-concurrency, high-traffic access? This article shares ideas and solutions for high-concurrency, high-traffic web systems. I hope it is helpful to you!


High concurrency web architecture related concepts


  • QPS: the number of requests or queries per second; in the Internet field it usually refers to the number of HTTP requests handled per second.

  • Peak QPS: (80% of total PV) / (20% of the seconds in 6 hours), based on the rule of thumb that 80% of visits are concentrated in 20% of the time

  • Number of concurrent connections: The number of requests processed by the system simultaneously

  • Throughput: The number of requests processed per unit time (usually determined by QPS and the number of concurrency).

  • Response time: The time it takes from the request to the response. For example, it takes 100ms for the system to process an HTTP request. This 100ms is the system's response time.

  • PV: Page View, i.e. page views/clicks — the number of pages visited within 24 hours.

  • UV: Unique Visitor — the same visitor visiting the site multiple times within a given time range is counted as only one unique visitor.

  • Bandwidth: Calculating bandwidth focuses on two indicators, peak traffic and average page size.

  • Daily website bandwidth = PV / statistical time (seconds in a day) * average page size (KB) * 8.

  • Stress testing: measure the maximum concurrency and the maximum QPS. Note that the machine generating the load must be separate from the machine under test — never run load tests against production servers — and that CPU, memory, network, etc. on both the machine running ab and the machine under test should stay below 75% of their limits. Observe:

    • Concurrency

    • Response speed

    • Fault tolerance

  • Commonly used performance testing tools: ab, wrk, http_load, Web Bench, Siege, Apache JMeter.
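As a quick sketch, the peak-QPS and bandwidth formulas above can be turned into a small calculator. The PV and page-size figures below are made-up examples, not from the article:

```python
# Capacity estimates using the article's rules of thumb.
# The inputs (daily_pv, avg_page_kb) are illustrative example figures.

def peak_qps(daily_pv: int) -> float:
    """Peak QPS = (80% of total PV) / (20% of the seconds in 6 hours)."""
    return (daily_pv * 0.8) / (6 * 3600 * 0.2)

def daily_bandwidth_kbps(daily_pv: int, avg_page_kb: float) -> float:
    """Daily bandwidth = PV / seconds-per-day * average page size (KB) * 8."""
    return daily_pv / 86400 * avg_page_kb * 8

if __name__ == "__main__":
    pv = 10_000_000                      # 10 million page views per day (example)
    print(f"peak QPS = {peak_qps(pv):.0f}")                      # 1852
    print(f"avg bandwidth = {daily_bandwidth_kbps(pv, 50):.0f} kbit/s")
```

With 10 million daily PV, the estimate is roughly 1852 requests/s at peak — the number a stress test would then need to confirm.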

Overall solution for high concurrency and large traffic web


  • Traffic optimization

  • Anti-hotlinking of web resources: prevent third-party sites from embedding your images, css, js, etc. and consuming your server traffic and bandwidth

  • Front-end optimization

  • Reduce http requests: merge images (sprites), merge and compress js and css. Individual files may become larger, but the number of requests drops

  • Use asynchronous requests: fetch data on demand through ajax calls to an interface

  • Enable browser caching and file compression (you can also enable nginx compression module)

  • cdn acceleration: solves bandwidth bottlenecks by caching data on cdn nodes; requests are served from the nearest node, reducing bandwidth use and speeding up access.

  • Set up a dedicated image server: images are very io-intensive, so the image service can be completely separated from the web server and distinguished from other servers. A standalone image server is not compute-bound, so its configuration can be tuned accordingly, and image servers can also be clustered

  • Optimization of the server

    • Page staticization: render dynamic pages to static html to reduce server load; beware of cache penetration, and give the static copies an expiry time

    • Concurrent processing in dynamic languages: asynchronous processing, multi-threading, queue-based asynchronous processing

  • Optimization of database:

  • Database caching: memcache, redis

  • mysql index optimization, splitting databases and tables, partitioning, master-slave replication with read-write separation, load balancing, master-slave hot standby

  • Web server optimization:

  • Load balancing: nginx's reverse proxy can provide (seven-layer) load balancing, and four-layer load balancing can be implemented with lvs at the network layer

web server load balancing


Load balancing

  • Four-layer load balancing: so-called four-layer load balancing is load balancing based on IP and port

  • Seven-layer load balancing: so-called seven-layer load balancing is load balancing based on application-layer (URL) information

Seven-layer load balancing implementation:

Load balancing based on application-layer information such as the URL. nginx's proxy is a very powerful module that implements seven-layer load balancing: powerful features, excellent performance, stable operation, simple and flexible configuration; it can automatically remove backend servers that are not working properly; file uploads can be handled in asynchronous mode; and it supports multiple distribution strategies, with assignable weights and flexible distribution methods.

nginx load balancing strategy

  • IP Hash (built-in)

  • Weighted polling (built-in)

  • fair strategy (extension)

  • Universal hash (extension)

  • Consistent hash (extension)

1. IP Hash strategy

Another load balancing strategy built into nginx. The flow is very similar to round robin; only the selection algorithm changes: the backend is picked by hashing the client IP, so requests from the same IP consistently reach the same backend. The IP hash algorithm is, in effect, a disguised round-robin algorithm.

2. Weighted polling strategy

Requests are first assigned to the machine with the highest weight, until its current weight drops below the others'; then requests go to the next highest-weight machine. When all backend machines are down, nginx immediately resets all machines' flags to the initial state, to avoid leaving every machine marked as timed out.

3. Fair strategy

Judges the load of the backend servers by their response times and picks the least-loaded machine to receive the request.

Generic hash and consistent hash strategies: generic hash is relatively simple — nginx's built-in variables can be used as the hash key. Consistent hash uses nginx's built-in consistent hash ring and supports memcache.
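As a sketch, the built-in strategies above map onto an nginx `upstream` block like the following. The server addresses, weights, and the choice of which strategy to enable are placeholder assumptions, not from the article:

```nginx
# Hypothetical backend pool showing nginx's built-in strategies.
upstream backend {
    # ip_hash;                    # uncomment to switch to the IP-hash strategy
    server 10.0.0.1 weight=3;     # weighted round robin: ~3x the others' share
    server 10.0.0.2;
    server 10.0.0.3 backup;       # only used when the other servers are down
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;    # seven-layer proxying to the pool
    }
}
```

The fair and consistent-hash strategies mentioned above come from third-party modules and would be configured with their own directives.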

Four-layer load balancing implementation:

Based on the destination address and port in the packet, plus the server-selection method configured on the load-balancing device, the final internal server is chosen.

lvs related terms:

  • DS: Director Server, the front-end load balancer node

  • RS: Real Server, the backend real server

  • VIP: the IP address that directly faces users, usually a public IP address

  • DIP: Director Server IP, mainly used for communicating with internal hosts

  • RIP: Real Server IP, the backend real server's IP address

  • CIP: Client IP

lvs three methods of load balancing:

NAT: rewrite the destination IP address to the backend RealServer's IP address

DR: rewrite the destination mac address to the backend RealServer's mac address

TUNNEL: rarely used; often used for remote disaster recovery

Advantages and disadvantages of four- and seven-layer load balancing

Layer 4 can carry more concurrency than layer 7 and is commonly used by large sites.

Layer 7 can implement more complex load-balancing control, such as by URL, by session, or dynamic/static separation.

Layer 7 consumes a lot of cpu time, so the concurrency it can carry is smaller.

cdn acceleration


What is CDN?

Node: It can be understood as a mirror of the real server.

CDN stands for Content Delivery Network: a content distribution network that avoids, as far as possible, the bottlenecks and links on the Internet that can affect the speed and stability of data transmission, making content delivery faster and more stable.

It is a layer of intelligent virtual network on top of the existing Internet, made up of node servers placed throughout the network.

The cdn system can redirect the user's request to the service node closest to the user in real time based on comprehensive information such as network traffic, connections to each node, load conditions, distance to the user, and response time.

What are the advantages of cdn?

  • 1. Local cache acceleration to improve the access speed of corporate sites (especially sites containing a large number of pictures and static pages)

  • 2. Cross-operator network acceleration ensures that users on different networks receive good access quality

  • 3. Remote access users intelligently and automatically select Cache servers based on DNS load balancing technology

  • 4. Automatically generates remote mirror cache servers; when remote users access the site, data is read from the cache server, which reduces remote-access bandwidth, shares network traffic, and eases the load on the site's web servers.

  • 5. Widely distributed CDN nodes and intelligent redundancy mechanisms between nodes can effectively prevent hacker intrusions

What is the working principle of cdn?

Traditional access: The user enters a domain name in the browser to initiate a request, resolves the domain name to obtain the server IP address, finds the corresponding server based on the IP address, and the server responds and returns data.

Access with cdn: the user initiates a request; intelligent dns resolution (based on the user's IP it determines geographic location and network type, then picks the server with the shortest route and lightest load) returns a cache server ip; the cache server returns the content to the user if it holds it, otherwise it requests the origin site, returns the result to the user, and stores a copy in the cache server.

What are the applicable scenarios for cdn?

Accelerate the distribution of a large number of static resources in a site or application, such as css, js, images and html

How to implement cdn?

  • CDN services provided by vendors such as BAT (Baidu, Alibaba, Tencent)

  • Layer 4 load balancing using LVS

  • nginx, varnish, squid, or apache trafficserver can be used for seven-layer load balancing and caching; use squid or nginx as the reverse proxy.

Establish an independent image server


Is a separate image server necessary?

  • 1. Share the I/O load of the web server, separate the resource-consuming picture service, and improve the performance and stability of the server

  • 2. Can specifically optimize the image server, set up targeted caching solutions for image services, reduce bandwidth costs, and improve access speed

Why use independent domain name?

Reasons: the number of concurrent browser connections per domain is limited, and a separate domain breaks through that limit. Cookies also hurt caching: most web caches only cache cookie-free requests, so if images live on the main domain, every image request carries cookies and misses the cache.

What problems arise after separation?

  • How to upload and synchronize pictures

  • NFS sharing

  • FTP synchronization

Dynamic page staticization


Related concepts: what dynamic-language staticization is, why pages should be staticized, and how staticization is implemented.

Concurrent processing of dynamic languages


What is a process

A process is a running instance of a program operating on some data set in the computer; it is the basic unit of resource allocation and scheduling in the system, and the foundation of the operating system's structure.

A process is "a program in execution".

Three-state model of the state of the process

In a multiprogramming system, processes run alternately on the processor, and the state changes continuously.

  • Running: When a process is running on the processor, it is said to be in a running state. The number of processes in this state is less than or equal to the number of processors. For a single-processor system, there is only one process in the running state. When no other processes can be executed (for example, all processes are blocked), the system's idle process is usually automatically executed.

  • Ready: When a process has obtained all resources except the processor and can run once it obtains the processor, it is said to be in a ready state. The ready state can be queued according to multiple priorities. For example, when a process enters the ready state due to the time slice running out, it is placed in a low-priority queue; when a process enters the ready state due to completion of an I/O operation, it is placed in a high-priority queue.

  • Blocked: also called the waiting or sleeping state. The process is waiting for some event to occur (such as an I/O request to complete) and temporarily stops running. In this state the process cannot run even if the processor is allocated to it, so it is said to be blocked.

What is a thread

Because of concurrent user requests, creating a process per request is clearly unreasonable, both in system resource overhead and in responsiveness to user requests. Hence operating systems introduced the concept of threads.

Threads are sometimes called lightweight processes and are the smallest unit of program execution flow.

A thread is an entity within a process and the basic unit the system schedules and dispatches independently. The thread itself owns no system resources beyond the few essential for running, but it shares all the resources owned by the process with the other threads in the same process.

A thread can create and cancel another thread, and multiple threads in the same process can execute concurrently.

A thread is a single sequential control process in a program. A relatively independent and schedulable execution unit within a process. It is the basic unit for the system to independently schedule and allocate CPUs. It refers to the scheduling unit of a running program.

Thread three states

  • Ready state: The thread has all the conditions for running, can logically run, and is waiting for the processor.

  • Running status: The thread occupying the processor is running.

  • Blocking state: The thread is waiting for an event (such as a semaphore) and is logically unexecutable.

What is a coroutine?

A coroutine is a lightweight user-mode thread whose scheduling is entirely controlled by the user. A coroutine has its own register context and stack. When a coroutine switch happens, the register context and stack are saved elsewhere; when switching back, the previously saved register context and stack are restored. Since it manipulates the stack directly, there is essentially no kernel-switch overhead, and global variables can be accessed without locks, so context switching is very fast.

What is the difference between thread and process?

  • 1. A thread is an execution unit within a process; a process has at least one thread. Threads share the process's address space, while each process has its own independent address space.

  • 2. The process is the unit of resource allocation and ownership. Threads in the same process share the resources of the process.

  • 3. Thread is the basic unit of processor scheduling, but process is not

  • 4. Both can be executed concurrently

  • 5. Each independent thread has an entry point for program execution, a sequential execution sequence, and a program exit point. But threads cannot execute independently; they must live inside an application, which provides execution control for its multiple threads.

What is the difference between threads and coroutines?

  • 1. A thread can have multiple coroutines, and a process can also have multiple coroutines on its own

  • 2. Threads and processes are synchronous mechanisms, while coroutines are asynchronous

  • 3. A coroutine retains the state of its last call; each re-entry resumes from where the previous call left off
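Point 3 above — a coroutine resuming from the state of its previous call — can be illustrated with a Python generator (a minimal sketch, not tied to any particular framework):

```python
# A generator-based coroutine: each send() resumes exactly where the
# previous call left off, preserving local state (the running total).

def running_sum():
    total = 0
    while True:
        value = yield total   # pause here; resume when sent the next value
        total += value

coro = running_sum()
next(coro)                    # prime the coroutine up to the first yield
print(coro.send(3))           # 3
print(coro.send(4))           # 7 -- the state from the previous call is retained
```

The second `send` produces 7, not 4, precisely because the coroutine re-entered "the state of the last call".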

What is multi-process?

Multi-processing means two or more processes are allowed to be running in the same computer system at the same time. Each additional process means additional resources are allocated, and inter-process communication is inconvenient.

What is multi-threading?

Threads split a process into many slices, each of which can run as an independent flow of execution. Unlike multi-process, multi-threading uses only one process's resources, and threads can communicate with each other directly.

What are the differences between multiple concepts?

  • Single process, single thread: one person eating at one table

  • Single process, multiple threads: several people eating at one table

  • Multiple processes, single thread each: several people each eating at their own table

Synchronous blocking model

Multi-process: the earliest server programs solved concurrent IO with multiple processes or threads. Each request creates a process; the child process then enters a loop, interacting with the client connection synchronously and in a blocking manner, sending, receiving, and processing data.

Steps

  • Create a socket

  • Enter a while loop and block on accept, waiting for client connections; in the multi-process model, the main process creates a child process via fork.

In the multi-thread model, a child thread is created instead.

After the child process/thread is created successfully, it enters its own while loop, blocking on recv, waiting for the client to send data to the server.

After receiving the data, the server program processes it and then uses send to send a response to the client.

When the client connection closes, the child process/thread exits and destroys all resources; the main process/thread reaps this child process/thread.

This model relies heavily on the number of processes to solve concurrency problems.

Starting a large number of processes will bring additional process scheduling consumption
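The accept/recv/send loop described above can be sketched as follows, using threads in place of forked child processes for portability. The host, port, and echo payload are illustrative:

```python
# Minimal synchronous blocking echo server: one worker per connection,
# mirroring the accept/recv/send steps above (threads stand in for
# fork()ed child processes for portability).
import socket
import threading

def handle_client(conn: socket.socket) -> None:
    with conn:
        while True:
            data = conn.recv(4096)     # block until the client sends data
            if not data:               # empty read: client closed the connection
                break
            conn.sendall(data)         # send the response back

def serve(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))             # port 0: let the OS pick a free port
    srv.listen()

    def accept_loop():
        while True:
            try:
                conn, _addr = srv.accept()   # block waiting for a client
            except OSError:                  # server socket was closed
                return
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv

if __name__ == "__main__":
    srv = serve()
    with socket.create_connection(srv.getsockname()) as c:
        c.sendall(b"hello")
        print(c.recv(4096))            # b'hello'
    srv.close()
```

Every connection ties up a whole thread while blocked on `recv` — which is exactly why this model scales by process/thread count and pays the scheduling cost mentioned above.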

Asynchronous non-blocking model

Today, high-concurrency asynchronous IO server programs are all implemented on top of epoll.

IO-multiplexed asynchronous non-blocking programs follow the classic Reactor model. A Reactor, as the name suggests, is a "reactor": it does not send or receive any data itself; it only monitors event changes on socket handles.

Reactor model:

- add: add a socket to the reactor
- set: modify the events the socket watches, e.g. readable/writable
- del: remove a socket from the reactor
- callback: invoke the registered function when an event occurs

nginx: Multi-threaded Reactor

swoole: Multi-threaded Reactor multi-process worker
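The add/set/del/callback interface above can be sketched in a few lines with Python's `selectors` module. This is a toy illustration of the pattern, not nginx's or swoole's actual implementation (`del` is named `delete` here because `del` is a Python keyword):

```python
# A toy Reactor: it only watches handles and dispatches callbacks;
# the callbacks do the actual recv/send, as described above.
import selectors
import socket

class Reactor:
    def __init__(self):
        self.sel = selectors.DefaultSelector()

    def add(self, sock, events, callback):     # add a socket to the reactor
        self.sel.register(sock, events, callback)

    def set(self, sock, events, callback):     # change the watched events
        self.sel.modify(sock, events, callback)

    def delete(self, sock):                    # remove from the reactor
        self.sel.unregister(sock)

    def run_once(self, timeout=1.0):           # one dispatch round
        for key, _mask in self.sel.select(timeout):
            key.data(key.fileobj)              # invoke the registered callback

# Demo: echo one message through the reactor using a connected socket pair.
a, b = socket.socketpair()
reactor = Reactor()
received = []

def on_readable(sock):
    received.append(sock.recv(4096))
    reactor.delete(sock)

reactor.add(b, selectors.EVENT_READ, on_readable)
a.sendall(b"hello reactor")
reactor.run_once()
a.close(); b.close()
print(received)   # [b'hello reactor']
```

One loop like this can watch thousands of sockets, which is the core idea behind epoll-based servers.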

PHP concurrent programming practice


  • 1. PHP's swoole extension: a parallel, high-performance network communication engine written in pure C. It provides the PHP language with an asynchronous multi-threaded server, asynchronous tcp/udp network clients, asynchronous mysql, asynchronous redis, database connection pooling, AsyncTask, message queues, millisecond timers, asynchronous file reading and writing, and asynchronous dns queries.


  • 2. Besides asynchronous IO support, swoole designed multiple concurrent data structures and IPC communication mechanisms for PHP's multi-process mode, which greatly simplify concurrent programming work

  • 3. swoole 2.0 supports Go-style coroutines, letting you implement asynchronous programs with fully synchronous code

  • 4. Message queue

  • 5. Application decoupling

  • Scenario: after a user places an order, the order system must notify the inventory system.

  • If the inventory system is unreachable, deducting stock fails, and so does the order

  • Decouple the order system from the inventory system

  • Introduce a queue

  • After the user places an order, the order system persists it, writes a message to the message queue, and returns success to the user

  • The inventory system subscribes to order messages, obtains them via pull/push, and updates stock based on the order information

  • 6. Traffic peak shaving. Scenario: flash sales — traffic surges instantly and puts the server under heavy pressure. After a user sends a request, the server writes it to the message queue first; if the queue length exceeds the maximum, it rejects the request or prompts the user to retry, thereby controlling request volume and alleviating the traffic spike

  • 7. Log processing. Scenario: transporting large volumes of logs — the log collector writes logs to the message queue, and the log-processing program subscribes to the queue and consumes them.

  • 8. Message communication: chat rooms

  • 9. Common message queue products: kafka, ActiveMQ, ZeroMQ, RabbitMQ, Redis, etc., plus PHP asynchronous message queues

  • 10. Concurrent interface requests: curl_multi_init
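The decoupling and peak-shaving ideas above can be sketched with an in-process queue, with Python's `queue.Queue` standing in for a real message queue product; the order/stock names and the queue bound are illustrative:

```python
# Decoupling the order system from the inventory system with a bounded
# in-process queue (a stand-in for kafka/RabbitMQ etc.).
import queue
import threading

order_queue: "queue.Queue" = queue.Queue(maxsize=1000)   # bounded: peak shaving
stock = {"sku-1": 5}

def place_order(order: dict) -> bool:
    """Order system: persist (omitted), enqueue, return immediately."""
    try:
        order_queue.put_nowait(order)
        return True                  # user sees success right away
    except queue.Full:
        return False                 # queue over its limit: reject / ask to retry

def inventory_worker() -> None:
    """Inventory system: consume order messages and deduct stock."""
    while True:
        order = order_queue.get()
        if order is None:            # sentinel: shut down
            break
        stock[order["sku"]] -= order["qty"]
        order_queue.task_done()

t = threading.Thread(target=inventory_worker, daemon=True)
t.start()
place_order({"sku": "sku-1", "qty": 2})   # returns True immediately
order_queue.put(None)                     # stop the worker
t.join()
print(stock)   # {'sku-1': 3}
```

The order system returns as soon as the message is enqueued; even if the inventory worker is slow or briefly down, orders keep succeeding and stock is deducted when the worker catches up.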

Optimization of the mysql cache layer


1. What is a database cache?

Common relational databases such as mysql store their data on disk. Under high concurrency, the insert, delete, update, and select operations that business applications issue against mysql cause huge I/O overhead and query pressure — a heavy burden on both the database and the server. The concept of caching data arose to solve this class of problems.

  • Greatly relieves the pressure on the database server

  • Improves the response speed of application data

Common cache forms: in-memory caches and file caches

2. Why use a database cache?

  • Caching data lets the client rarely — or never — query the database server; under high concurrency this minimizes access pressure on the database server.

  • User request → data query → connect to the database server and query → cache the data (html, memory, json, serialized data) → return to the client

  • Choice of caching method

  • Choice of caching scenario

  • Freshness (real-time requirements) of the cached data

  • Stability of the cached data

3. Using the mysql query cache

  • Enable the mysql query cache

  • Greatly reduces cpu usage

  • query_cache_type: the query cache type, with three values: 0, 1, 2. 0 disables the query cache; 1 always uses the query cache; 2 uses it on demand.

query_cache_type=1: select SQL_NO_CACHE * from my_table where condition; — opt out per query
query_cache_type=2: select SQL_CACHE * from my_table where condition; — opt in per query

query_cache_size defaults to 0, meaning no memory is reserved for the query cache and it cannot be used: SET GLOBAL query_cache_size = 134217728; The query cache can be viewed as a map from SQL text to query results: if a second query's SQL is exactly identical to the first's, the cached result is used. SHOW STATUS LIKE 'Qcache_hits' shows the number of hits. When a table's structure or data changes, the cached entries for it become invalid.

Clearing the cache:

  • FLUSH QUERY CACHE; // defragment the query cache memory

  • RESET QUERY CACHE; // remove all queries from the query cache

  • FLUSH TABLES; // close all open tables; this also empties the query cache

4. Using Memcache

For a large site without an intermediate cache layer, when traffic reaches the database layer — even with the preceding layers absorbing part of it — a flood of requests still pours into the database under high concurrency. This puts heavy pressure on the database server and degrades response times, so adding an intermediate cache layer is essential.

memcache is a distributed high-speed caching system originally developed by Brad Fitzpatrick of LiveJournal and now used by many websites to speed up access; the improvement is especially significant for large sites that query the database frequently. memcache is a high-performance distributed in-memory object cache: it maintains one huge unified hash table in memory and can store data in many formats, including images, video, files, and database query results. Simply put, it loads data into memory and serves reads from memory, greatly improving read speed.

Workflow: first check whether the requested data is in memcache; if so, return it directly without touching the database at all. If not, query the database, return the result to the client, and store a copy in memcached.

Generic caching convention: use the query method name plus its parameters as the key of the cached key-value pair
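The workflow above is the classic cache-aside pattern. A minimal sketch, with a dict standing in for memcached — `query_user`, the sample table, and the hit counter are all hypothetical illustrations:

```python
# Cache-aside lookup following the workflow above: check the cache first,
# fall back to the database, then populate the cache.
cache: dict = {}
db = {1: {"id": 1, "name": "alice"}}   # pretend database table
db_hits = 0                            # counts real database reads

def query_user(user_id: int) -> dict:
    global db_hits
    # key convention from the article: method name + parameters
    key = f"query_user:{user_id}"
    if key in cache:                   # cache hit: no database access at all
        return cache[key]
    db_hits += 1                       # cache miss: go to the database
    row = db[user_id]
    cache[key] = row                   # store a copy in the cache
    return row

query_user(1)                          # miss: reads the "database"
query_user(1)                          # hit: served from the cache
print(db_hits)   # 1
```

With real memcached the dict operations become `get`/`set` calls, and the cached entry would also carry an expiry time.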

5. Using Redis

Differences from memcache:

  • Performance is similar

  • Since version 2.0, redis has its own VM feature to break through physical-memory limits; memcache's maximum usable memory can be adjusted, and it evicts with an LRU algorithm

  • redis relies on the client to implement distributed reads and writes

  • memcache itself has no data-redundancy mechanism

  • redis supports persistence (snapshots, aof); aof improves reliability but costs some performance

  • redis suits high-performance operations and computation on smaller data volumes

  • memcache is used to reduce database load and improve performance in dynamic systems; it is well suited as a pure cache

  • Both can also store other data: sessions, via session_set_save_handler

Optimization of mysql data layer


  • Data type optimization for table columns: int, smallint, bigint, enum; store IP addresses as integers, converting with ip2long

  • More indexes is not better: create appropriate indexes on appropriate fields

  • Follow the leftmost-prefix principle of indexes

  • The leading-% problem in like queries

  • Avoid full table scans

  • Index usage with or conditions

  • Index invalidation on string-typed columns

  • Optimize data access in queries: use limit, avoid select *, turn complex queries into simple ones, split large queries, decompose join queries

  • Optimize specific types of query statements: optimize count(), optimize join queries, optimize subqueries, optimize group by and distinct, optimize limit and union

  • Optimization of storage engine: try to use innodb

  • Optimization of database table structure: partitioning (transparent to users), splitting databases and tables (horizontal splits, and vertical splits into secondary tables)

  • Optimization of the database server architecture: master-slave replication, read-write separation, dual-master hot standby, load balancing (implemented with lvs, or with the MyCat database middleware)
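For illustration, storing an IP as an integer — the equivalent of PHP's ip2long/long2ip mentioned above — looks like this in Python, using the standard ipaddress module (the sample address is arbitrary):

```python
# Store an IPv4 address as an int column instead of a string:
# smaller, comparable, and range-queryable.
import ipaddress

def ip2long(ip: str) -> int:
    """IPv4 dotted-quad string -> 32-bit integer."""
    return int(ipaddress.IPv4Address(ip))

def long2ip(n: int) -> str:
    """32-bit integer -> IPv4 dotted-quad string."""
    return str(ipaddress.IPv4Address(n))

print(ip2long("192.168.1.1"))   # 3232235777
print(long2ip(3232235777))      # 192.168.1.1
```

MySQL's built-in INET_ATON()/INET_NTOA() do the same conversion on the database side.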

Statement:
This article is reproduced from github.io. If there is any infringement, please contact admin@php.cn for removal.