Why is nginx faster than apache?
Let’s start with a few general points:
1: nginx is faster than Apache under high concurrency; at low concurrency the difference is not obvious
2: The speed comes mainly from nginx’s epoll-based event model
Apache is multi-process (or multi-threaded). When an HTTP request arrives, a single process handles the entire lifecycle: accept the connection (listen) –> identify and process the request –> return the response. Throughout this flow, Apache’s socket reads and writes are blocking, and blocking means the process is suspended and put to sleep until the I/O completes. Once there are many connections, Apache inevitably spawns many processes to serve them; with many processes, the CPU switches between them frequently, which wastes time and resources. That is why Apache’s performance declines under load: to put it bluntly, it cannot afford that many processes.
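The blocking, one-worker-per-connection style described above can be sketched in a few lines of Python. This is a simplified illustration, not Apache’s actual code: a local socket pair stands in for a real accepted connection, and `handle_client` is an illustrative name.

```python
# Minimal sketch of the process-per-connection, blocking-I/O style.
import socket

def handle_client(conn):
    """One worker handles one client from start to finish.

    recv() here BLOCKS: the process sleeps until data arrives,
    so it cannot serve any other client in the meantime."""
    data = conn.recv(1024)        # blocks until the client sends
    response = b"echo: " + data   # "identify and process"
    conn.sendall(response)        # may also block if the send buffer is full
    conn.close()

# Demo: a local socket pair stands in for an accepted connection.
client, server_side = socket.socketpair()
client.sendall(b"GET /")
handle_client(server_side)
print(client.recv(1024).decode())  # -> echo: GET /
```

To serve a second client concurrently in this model, you would need a second process or thread, which is exactly where the context-switching cost comes from.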
Nginx adopts the epoll model, which is asynchronous and non-blocking. In nginx, the handling of a complete connection request is divided into events, one event at a time: accept(), receive(), disk I/O, send(), and so on. Each part has a corresponding module to process it, and a complete request may be handled by hundreds of modules. The real core is the event collection and distribution module, which coordinates all the other modules.
Only when the core module schedules it does a module get CPU time to process its part of the request. Take an HTTP request as an example. First, the listening event of interest is registered with the event collection and distribution module. Registration returns immediately without blocking, and nothing more needs to be done: when a connection arrives, the kernel (via epoll) notifies the process, and in the meantime the CPU is free to handle other work.
Once a request comes in, a context is assigned to it (in fact, allocated in advance), and new events of interest (a read handler) are registered. Likewise, when the client’s data arrives, the kernel automatically notifies the process that the data can be read. After the data is read and parsed, the requested resource is fetched from disk (I/O); once the I/O completes, the process is notified and starts sending data back to the client with send(). This, too, does not block: after the call, the process simply waits for the kernel’s notification of the result.
The entire request is thus divided into many stages, each handled by the modules registered for it, all asynchronously and without blocking. “Asynchronous” here means starting an operation without waiting for its result: you are automatically notified when it is done.
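The register-then-get-notified loop described above can be sketched with Python’s standard `selectors` module (which uses epoll under the hood on Linux). This is a toy illustration of the pattern, not nginx’s C implementation; `on_readable` and the socket-pair setup are assumptions for the demo.

```python
# Minimal sketch of an event-driven, non-blocking loop (epoll-style).
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll on Linux

def on_readable(conn):
    """Callback run only when the kernel reports data is ready."""
    data = conn.recv(1024)          # will not block: readiness was reported
    if data:
        conn.sendall(b"echo: " + data)

# Two "clients" handled by ONE process: no forking, no threads.
pairs = [socket.socketpair() for _ in range(2)]
for client, server_side in pairs:
    server_side.setblocking(False)              # never suspend on I/O
    sel.register(server_side, selectors.EVENT_READ, on_readable)

pairs[0][0].sendall(b"first")
pairs[1][0].sendall(b"second")

served = 0
while served < 2:
    for key, _ in sel.select():                 # wait for kernel notification
        key.data(key.fileobj)                   # dispatch to the callback
        served += 1

print(pairs[0][0].recv(1024).decode())  # -> echo: first
print(pairs[1][0].recv(1024).decode())  # -> echo: second
```

The key point is that `sel.register()` returns immediately, and the single loop only touches a connection when the kernel says it is ready, which is why one process can juggle many connections.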
Here is an analogy I found online:
A simple example illustrates Apache’s workflow. Think of a restaurant where one waiter serves a customer from start to finish. The waiter waits for the guest at the door (listen); when the guest arrives, he greets them and seats them at a table (accept); he waits for the customer to order (request URI), takes the order to the kitchen (disk I/O), waits for the dish to be ready (read), and then serves it to the guest (send). The waiter (process) blocks at many points along the way.
In this way, when more guests arrive (more HTTP requests), the restaurant can only hire more waiters to serve them (fork more processes). But since the restaurant’s resources (CPU) are limited, once there are too many waiters, the cost of managing them (CPU context switching) becomes very high, and the restaurant hits a bottleneck.
Let’s see how nginx handles it. Hang a doorbell at the restaurant door (register the listening socket with epoll). Once a guest (HTTP request) arrives, a waiter receives them (accept), then goes off to do other things (such as receiving the next guest) instead of waiting on this one. When the guest has decided on their order, they ring for the waiter (data has arrived for read()); the waiter takes the menu to the kitchen (disk I/O) and again goes off to do other work. When the kitchen is done, it calls the waiter (disk I/O complete), and the waiter serves the dish to the guest (send()). The kitchen serves each dish as soon as it is ready, and in between, the waiters are free to do other things.
The whole process is divided into many stages, each with its corresponding service module. Thought of this way, once there are more guests, the restaurant can accommodate more of them too.
The above is the detailed content of why nginx is faster than Apache. For more information, see related articles on the PHP Chinese website!


