nginx optimization configuration example
The number of nginx worker processes. It is recommended to set this according to the number of CPUs, usually equal to the CPU count or a multiple of it.
Bind each worker process to a CPU. In the example above, 8 processes are assigned to 8 CPUs. You can of course write more than one mask, or assign a single process to multiple CPUs.
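As a sketch, the two directives described above might look like this for the 8-CPU case (the bitmasks are illustrative):
    worker_processes 8;
    worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;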
This directive sets the maximum number of file descriptors a single nginx process may open. The theoretical value is the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep this value consistent with ulimit -n.
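For example, assuming ulimit -n reports 65535, the directive could simply be set to match it:
    worker_rlimit_nofile 65535;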
Use the epoll I/O event model; on Linux this goes without saying.
The maximum number of connections allowed per worker process. In theory, the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
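A minimal sketch of the events block, assuming 65535 connections per worker:
    events {
        use epoll;
        worker_connections 65535;
    }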
keepalive timeout.
The buffer size for client request headers. This can be set according to your system's page size. The headers of a request normally do not exceed 1k, but since the system page size is generally larger than 1k, the buffer is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
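Assuming a 60-second keepalive and a 4k page size (as reported by getconf PAGESIZE), these two directives might read:
    keepalive_timeout 60;
    client_header_buffer_size 4k;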
This specifies a cache for open files. It is not enabled by default. max specifies the number of cache entries; it is recommended to keep this consistent with the number of open files. inactive specifies how long a file can go without being requested before its cache entry is removed.
This specifies how often to check the validity of the cached entries.
The minimum number of times a file must be used within the inactive period of the open_file_cache directive. If this count is exceeded, the file descriptor stays open in the cache. As in the example above, if a file is not used even once within the inactive time, it will be removed.
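A sketch of the open-file cache directives just described, with illustrative values (65535 entries, 20s inactive, a 30s validity check, and a minimum of 1 use):
    open_file_cache max=65535 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;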
The maximum number of TIME-WAIT sockets; the default is 180000.
The range of ports the system is allowed to open.
Enable fast recycling of TIME-WAIT sockets.
Enable reuse. Allows TIME-WAIT sockets to be reused for new TCP connections.
Enable SYN cookies: when the SYN wait queue overflows, fall back to cookies to handle connections.
The backlog passed to the listen() call in a web application is limited by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
The maximum number of packets allowed to be queued when each network interface receives packets faster than the kernel can process them.
The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; you should not rely on it too much or artificially lower it. If anything, increase the value (after adding memory).
The maximum number of remembered connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128M of memory, and 128 for systems with less memory.
Timestamps can avoid sequence number wraparound. A 1 Gbps link will certainly encounter sequence numbers that have been used before; the timestamp lets the kernel accept such "abnormal" packets. It is turned off here.
To open a connection to the peer, the kernel needs to send a SYN with an ACK acknowledging the earlier SYN; this is the second step of the so-called three-way handshake. This setting determines the number of SYN+ACK packets the kernel sends before giving up on the connection.
The number of SYN packets sent before the kernel gives up establishing the connection.
If the socket was closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer can misbehave and never close its side of the connection, or even crash unexpectedly. The default value is 60 seconds; the usual value on 2.2 kernels was 180 seconds. You can use this setting, but remember that even on a lightly loaded web server there is a risk of memory exhaustion from a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most 1.5K of memory, but these sockets live longer.
When keepalive is enabled, how often TCP sends keepalive messages. The default is 2 hours.
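Collected into /etc/sysctl.conf, the kernel parameters discussed above might look like the following sketch; the values are illustrative (and some, such as tcp_fin_timeout and tcp_tw_recycle, are aggressive or unavailable on newer kernels), so adjust them to your environment and apply with sysctl -p:
    net.ipv4.tcp_max_tw_buckets = 6000
    net.ipv4.ip_local_port_range = 1024 65000
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_syncookies = 1
    net.core.somaxconn = 262144
    net.core.netdev_max_backlog = 262144
    net.ipv4.tcp_max_orphans = 262144
    net.ipv4.tcp_max_syn_backlog = 262144
    net.ipv4.tcp_timestamps = 0
    net.ipv4.tcp_synack_retries = 1
    net.ipv4.tcp_syn_retries = 1
    net.ipv4.tcp_fin_timeout = 1
    net.ipv4.tcp_keepalive_time = 30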
This directive specifies a path for the FastCGI cache, the directory hierarchy levels, the key zone storage time, and the inactive deletion time.
Specifies the timeout for connecting to the backend FastCGI server.
The timeout for sending a request to FastCGI; this is the timeout for transmitting the request to FastCGI after two handshakes have completed.
The timeout for receiving a FastCGI response; this is the timeout for receiving the response after two handshakes have completed.
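A sketch of these four directives; the cache path, levels, zone name (TEST), and timeout values are assumptions for illustration:
    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;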
Specifies how large a buffer is needed to read the first part of the FastCGI response. This can be set to the buffer size specified by the fastcgi_buffers directive. The directive above tells nginx to use one 16k buffer to read the first part of the response, i.e. the response header, which in fact is usually very small (no more than 1k). However, if you specify a buffer size in the fastcgi_buffers directive, nginx will also allocate a buffer of that size for caching.
Specifies how many local buffers of what size are used to buffer FastCGI responses. As shown above, if a page generated by a PHP script is 256k, 16 buffers of 16k each are allocated to cache it; if it is larger than 256k, the part exceeding 256k is cached in the path specified by fastcgi_temp. Of course this is unwise for server load, since data is processed faster in memory than on disk. Usually this value should be a middle value for the page sizes your site's PHP scripts generate. For example, if most scripts on your site produce pages of 256k, you could set this to 16 16k, 4 64k, or 64 4k; but clearly the latter two are not good choices, because if a generated page is only 32k, with 4 64k one 64k buffer is allocated, and with 64 4k eight 4k buffers are allocated, whereas with 16 16k two 16k buffers are allocated, which looks more reasonable.
I do not know what this directive does; I only know that the default value is twice fastcgi_buffers.
How large a data block is used when writing to fastcgi_temp_path; the default value is twice fastcgi_buffers.
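Following the 16 x 16k example discussed above, the buffer directives might read as follows (busy and temp write sizes set to twice the per-buffer size):
    fastcgi_buffer_size 16k;
    fastcgi_buffers 16 16k;
    fastcgi_busy_buffers_size 32k;
    fastcgi_temp_file_write_size 32k;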
Enables the FastCGI cache and assigns it a name. Personally I find enabling the cache very useful: it can effectively reduce CPU load and prevent 502 errors. But this cache can also cause many problems, because it caches dynamic pages; how to use it depends on your own needs.
Specifies cache times for given response codes. In the example above, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.
The minimum number of times a file must be used within the inactive period of the fastcgi_cache_path directive. As in the example above, if a file is not used even once within 5 minutes, it will be removed.
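A sketch of the cache directives, reusing the hypothetical zone name TEST and the times mentioned above:
    fastcgi_cache TEST;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;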
I do not know the purpose of this parameter; my guess is that it tells nginx which types of cache are useless. The above are the FastCGI-related parameters in nginx. In addition, FastCGI itself has some configuration worth optimizing; if you use php-fpm to manage FastCGI, you can adjust the following values in its configuration file:
The number of concurrent requests handled simultaneously, i.e. php-fpm will start at most 60 children to handle concurrent connections.
The maximum number of open files.
The maximum number of requests each process can handle before it is respawned.
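Assuming a modern ini-style php-fpm.conf (or pool file), the corresponding settings might look like the following sketch; the 60 children come from the text above, while the other values are illustrative:
    pm.max_children = 60
    rlimit_files = 65535
    pm.max_requests = 10240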
This concludes the nginx optimization configuration example. I hope it is helpful to readers interested in PHP tutorials.