
Before graduating, once my project was finished, I got into socket programming for a while and used the Qt framework in C++ to write toy TCP and UDP communication clients. On a phone call, a senior colleague advised me to dig deeper into sockets and consider the back-end or architect career path. When I asked how, the answer was: study source code. For socket-related knowledge, server source code is the most appropriate material. As for which server to choose, after careful investigation I concluded that, compared with the heavier and bulkier Apache, nginx is smaller and better engineered. So before formally digging into the source, I did some background reading, summarized below.

1. Process model

First, by default, like other servers, nginx on Unix runs continuously in the background as a daemon. For debugging, background mode can be turned off to run nginx in the foreground, and the master process can even be disabled through configuration (explained in detail later) so that nginx runs as a single process. These modes have little to do with the architecture nginx is known for, so I will not cover them here. Although nginx also supports multi-threading, we focus on understanding its default multi-process mode.

  After startup, nginx creates one master process (main process) and several worker processes. The master process mainly manages the worker processes: it receives signals from the administrator and forwards them to the appropriate workers, monitors the workers' status, and re-creates and starts a new worker when one terminates abnormally. The worker processes handle the actual network events. Workers have equal priority and are independent of each other; they compete fairly for client requests, and each request is handled by exactly one worker process. The nginx process model is shown in Figure 1.

Figure 1: nginx process model diagram
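The master/worker relationship described above can be sketched in plain Python on a Unix system. This is a hypothetical simplification for illustration, not nginx's actual code; the function names (`spawn_worker`, `supervise`) are mine:

```python
import os

def spawn_worker():
    """Fork a child; in nginx the child would run the event loop."""
    pid = os.fork()
    if pid == 0:
        # Worker: a real worker would process network events; here we just exit.
        os._exit(0)
    return pid

def supervise(num_workers):
    """Master loop: fork workers, reap them as they exit, return count reaped."""
    workers = {spawn_worker() for _ in range(num_workers)}
    reaped = 0
    while workers:
        pid, status = os.waitpid(-1, 0)  # block until any worker exits
        workers.discard(pid)
        reaped += 1
        # A real master would re-fork here if the worker exited abnormally.
    return reaped

if __name__ == "__main__":
    n = os.cpu_count() or 1  # common heuristic: one worker per CPU core
    print(f"reaped {supervise(n)} workers")
```

The `os.cpu_count()` line mirrors the usual advice of matching the worker count to the number of CPU cores.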

The number of worker processes is configurable and is usually set to match the number of CPU cores; the reason is tied to nginx's event processing model, which we introduce later.

2. Signals and requests

nginx interacts with the outside world through two interfaces: signals from the administrator and requests from clients. Below are examples of how nginx handles each.

To control nginx, the administrator communicates with the master process by sending it a signal. For example, before version 0.8, the command `kill -HUP [pid]` was used to restart nginx. This achieves a graceful restart without service interruption: on receiving HUP, the master process first reloads the configuration file, then starts new worker processes and sends a stop signal to the old ones. The new workers begin accepting network requests while the old workers stop accepting new ones; once their in-flight requests are finished, the old workers exit and are destroyed. Since version 0.8, nginx provides command-line parameters for easier server management, such as `./nginx -s reload` and `./nginx -s stop`, used respectively to restart and stop nginx. Executing such a command actually starts a new nginx process which, after parsing the parameters, sends the corresponding signal to the master process on our behalf, achieving the same effect as sending the signal manually.

3. Requests and events
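A toy version of this signal-driven control can be written in a few lines of Python on a Unix system. This is a sketch of the idea only; the flag name `reload_requested` is mine, and nginx's real HUP handling does far more (re-reading config, rolling workers):

```python
import os
import signal

reload_requested = False

def on_hup(signum, frame):
    """In nginx, SIGHUP makes the master reload config and roll the workers."""
    global reload_requested
    reload_requested = True

signal.signal(signal.SIGHUP, on_hup)

# Simulate the administrator's `kill -HUP <pid>`, aimed at ourselves.
os.kill(os.getpid(), signal.SIGHUP)

if __name__ == "__main__":
    print("reload requested:", reload_requested)
```

`nginx -s reload` works the same way under the hood: a short-lived process reads the pid file and delivers the signal for you.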

Servers most often handle HTTP requests on port 80, so let's take that as the example of how nginx processes a request. First, every worker process is forked from the master process. In the master process, the socket to be listened on (IP address + port) is established first, along with the corresponding listenfd (listening file descriptor). Every process in socket communication must be assigned a port number, and this socket setup for the workers is completed by the master before forking, so all workers share the same listenfd. When a new connection arrives, listenfd becomes readable in every worker process. To ensure that only one worker handles the connection, each worker must first grab the accept_mutex (accept-connection mutex) before registering a read event on listenfd. The worker that successfully grabs the connection then reads the request, parses it, processes it, and sends the response data back to the client.
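The listen-before-fork pattern can be demonstrated with a loopback echo in Python. This is a minimal sketch under my own naming (`demo_shared_listener`); the accept_mutex itself is omitted, and the parent doubles as the client purely for demonstration:

```python
import os
import socket

def demo_shared_listener():
    """Master creates listenfd, forks a worker; the worker accepts and echoes."""
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
    lsock.listen(8)
    port = lsock.getsockname()[1]

    pid = os.fork()
    if pid == 0:
        # Worker: inherits the listening FD from the master and accepts on it.
        conn, _ = lsock.accept()
        conn.sendall(conn.recv(1024))  # echo the request back
        conn.close()
        os._exit(0)

    # Parent, acting as a client here so the demo is self-contained.
    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(b"hello")
    reply = client.recv(1024)
    client.close()
    os.waitpid(pid, 0)
    lsock.close()
    return reply

if __name__ == "__main__":
    print(demo_shared_listener())
```

With several forked workers all inheriting the same `lsock`, every worker would see the FD become readable at once, which is exactly why nginx guards accept with a mutex.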

4. Process model analysis

nginx uses, but is not limited to, the multi-process request model (PPC). Each worker processes only one request at a time, so resources are independent between requests: no locking is required, and requests are processed in parallel without affecting one another. If handling a request fails and a worker exits abnormally, service is not interrupted; instead, the master immediately starts a new worker, reducing the overall risk the server faces and making the service more stable. Compared with the multi-threaded model (TPC), however, the system overhead is slightly larger and the efficiency slightly lower, which must be compensated for by other means.

5. nginx's high-concurrency mechanism: asynchronous non-blocking events

IIS's event processing mechanism is multi-threaded: each request gets an exclusive worker thread. Multi-threading consumes more memory, and the CPU overhead of context switching between threads (the repeated saving and restoring of register sets) is also significant. When a multi-threaded server faces thousands of concurrent requests, this puts great pressure on the system, and high-concurrency performance suffers. Of course, if the hardware is good enough to provide sufficient system resources, that pressure ceases to be a problem.

Let's go down to the system level and discuss the differences between multi-process and multi-threaded designs, and between blocking and non-blocking mechanisms.

Readers familiar with operating systems will know that multi-threading exists to schedule and use the CPU more fully when resources are sufficient, and it is especially good at improving multi-core CPU utilization. However, the thread is the smallest unit of scheduling, while the process is the smallest unit of resource allocation. This confronts multi-threading with a big problem: as the number of threads and their resource demands grow, the parent process may be unable to acquire enough resources for all of its threads in one go. When the system cannot satisfy a process's resource request, it makes the entire process wait; even if the remaining system resources would suffice for some of the threads to keep working normally, the parent process cannot obtain them, so all the threads wait together. Put bluntly, with multi-threading the threads inside a process can be scheduled flexibly (at the cost of deadlock risk and thread-switching overhead), but there is no guarantee that the parent process, as it grows larger and heavier, will still be scheduled reasonably by the system. So multi-threading can indeed improve CPU utilization, but it is not an ideal answer to high-concurrency request handling on a server, let alone to keeping CPU utilization high under high concurrency. This is IIS's multi-threaded, synchronous blocking event mechanism.

nginx's multi-process mechanism ensures that each request applies for system resources independently; as soon as the conditions are met, each request can be processed immediately, that is, asynchronously and without blocking. However, creating a process costs more resources than creating a thread. To keep the number of processes small, nginx applies event scheduling so that I/O event handling does not rely on the multi-process mechanism alone but becomes an asynchronous, non-blocking multi-process mechanism. Next, we introduce nginx's asynchronous non-blocking event handling in detail.

6. epoll

Under Linux, high-performance, high-concurrency networking invariably means epoll, and nginx likewise uses the epoll model as its mechanism for handling network events. Let's first look at how epoll came about.

The earliest scheduling solution was busy polling: continuously traversing the socket set to check each socket's status. Obviously, this wastes CPU on needless traversals whenever the server is idle. Later, select and poll appeared as scheduling agents that improved CPU utilization. Literally, one "selects" and one "polls", but they are essentially the same: both poll a socket set to collect and process requests. The difference from before is that they can monitor I/O events: the polling thread blocks when idle and is awakened when one or more I/O events arrive, removing the "busy" from "busy polling" and becoming an asynchronous polling method. The select/poll model still scans the entire FD (file descriptor) set, i.e. the socket set, so its event-handling efficiency decreases linearly as the number of concurrent requests grows, which is why a macro is used to cap the maximum number of concurrent connections. In addition, select/poll communicates between kernel space and user space by copying memory, which brings high overhead. These shortcomings led to the creation of a new model.
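The select-style wait described above can be demonstrated with Python's `select` module. This is a two-FD sketch (real servers watch many sockets); the function name `select_demo` is mine:

```python
import select
import socket

def select_demo():
    """Block in select() until one side of a socket pair becomes readable."""
    a, b = socket.socketpair()
    # Nothing written yet: with a zero timeout, select reports no readable FDs.
    readable, _, _ = select.select([a], [], [], 0)
    assert readable == []
    b.sendall(b"event")
    # Now select wakes up immediately because `a` has data pending.
    readable, _, _ = select.select([a], [], [], 1.0)
    result = a.recv(5) if readable else b""
    a.close()
    b.close()
    return result

if __name__ == "__main__":
    print(select_demo())
```

Note that `select.select` takes the whole FD list on every call; that per-call scan is exactly the linear cost the paragraph above describes.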

epoll can be considered an abbreviation of "event poll". It is the Linux kernel's improvement of poll for handling large batches of file descriptors, an enhanced version of Linux's multiplexed I/O interfaces select/poll, and it significantly improves CPU utilization when a program has only a small number of active connections among a large number of concurrent ones. First, epoll places no hard limit on the maximum number of concurrent connections: the upper bound is the maximum number of files that can be opened, which depends on system memory (on a machine with 1 GB of RAM it is roughly 100,000). Second, and this is epoll's most significant advantage, it only operates on "active" sockets: only the sockets asynchronously awakened by kernel I/O read/write events are placed into the ready queue, ready to be handled by a worker process. In real production environments this saves a great deal of polling overhead and greatly improves event-processing efficiency. Finally, epoll avoids repeatedly copying the entire FD set between kernel space and user space on every call (this is often described as mmap-style shared memory), eliminating the memory-copy overhead of select/poll. In addition, nginx uses epoll's ET (edge-triggered) working mode, the fast mode. ET mode supports only non-blocking sockets: when an FD becomes ready, the kernel sends a notification through epoll; after operations leave the FD not ready, a notification will be sent again the next time it becomes ready, but as long as no I/O causes the FD's readiness to change, no further notifications are sent. In summary, nginx under Linux is event-driven and uses epoll to handle network events.
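Python's `selectors` module picks the best mechanism the platform offers (epoll on Linux), so a minimal event loop in its style illustrates the "active sockets only" idea. A sketch under my own naming (`epoll_style_demo`); edge-triggered mode itself is not exposed by `selectors`:

```python
import selectors
import socket

def epoll_style_demo():
    """Register a socket for read events; dispatch when the kernel reports it ready."""
    sel = selectors.DefaultSelector()   # epoll on Linux, kqueue on BSD, etc.
    a, b = socket.socketpair()
    a.setblocking(False)                # nginx's ET mode also requires non-blocking FDs
    sel.register(a, selectors.EVENT_READ, data="client")
    b.sendall(b"ready")
    # select() returns only FDs with pending events -- the "active socket" idea.
    events = sel.select(timeout=1.0)
    received = b""
    for key, mask in events:
        if mask & selectors.EVENT_READ:
            received = key.fileobj.recv(5)
    sel.unregister(a)
    a.close()
    b.close()
    sel.close()
    return received

if __name__ == "__main__":
    print(epoll_style_demo())
```

Unlike the earlier `select` sketch, the kernel keeps the interest set between calls here; user space is handed back only the FDs that actually fired.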

Copyright Statement: This article is an original article by the blogger and may not be reproduced without the blogger's permission.

The above has given an overview of nginx's core architecture. I hope it is helpful to friends who are interested in PHP tutorials.
