
Detailed explanation of Nginx configuration file

WBOY (Original)

2016-08-08 09:31:29



user nginx;

#Run worker processes as this user


worker_processes 8;

#Number of worker processes; adjust according to the hardware, typically equal to the number of CPU cores


error_log logs/nginx_error.log crit;

#Error log path and minimum severity level


pid logs/nginx.pid;

#Location of the pid file


worker_rlimit_nofile 204800;

#Maximum number of file descriptors a worker process may open

This directive sets the maximum number of file descriptors an nginx process can open. In theory the value should be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep the value consistent with ulimit -n.

On a Linux 2.6 kernel the open-file limit is typically 65535, so worker_rlimit_nofile should be set to 65535 accordingly.

Because nginx does not schedule requests to processes perfectly evenly, if you set this to 10240, a process may exceed 10240 descriptors once total concurrency reaches 30,000-40,000, and nginx will then return a 502 error.
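As a sketch of the alignment described above (the values are illustrative), the directive pairs naturally with the system limit:

```nginx
# Sketch: keep worker_rlimit_nofile aligned with the system limit
# reported by `ulimit -n` (204800 here is illustrative).
worker_processes     8;
worker_rlimit_nofile 204800;   # per-process FD limit, matching ulimit -n
```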


events

{

use epoll;

#use specifies the I/O event model

Supplementary notes:

Like apache, nginx has different event models for different operating systems:

A) Standard event model

Select and poll belong to the standard event model. If the current system has no more efficient method available, nginx will choose select or poll.

B) Efficient event models

Kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. Using kqueue on a dual-processor MacOS X system may cause a kernel crash.

Epoll: used on systems with Linux kernel 2.6 and later.

/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

Eventport: used on Solaris 10. To prevent kernel crashes it is necessary to install the security patches.



worker_connections 204800;

#Maximum number of connections per worker process; adjust according to the hardware and use together with worker_processes. Make it as large as you can without driving the CPU to 100%.

This is the maximum number of connections allowed per process. In theory the maximum number of connections per nginx server is worker_processes * worker_connections.

}


keepalive_timeout 60;

#keepalive timeout in seconds.


client_header_buffer_size 4k;

#Buffer size for the client request header. This can be set according to your system's page size. Normally a request header is smaller than 1k, but since the typical system page is larger than 1k, the page size is used here.

The page size can be obtained with the command getconf PAGESIZE:

[root@web001 ~]# getconf PAGESIZE

4096

There are cases where client_header_buffer_size needs to exceed 4k, but the value must always be set to an integral multiple of the system page size.
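For example, on a system where getconf PAGESIZE reports 4096, a larger buffer would be sized in whole pages (the 8k value is illustrative):

```nginx
# 8k = 2 x 4096-byte pages, an integral multiple of the page size
client_header_buffer_size 8k;
```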


open_file_cache max=65535 inactive=60s;

#This enables a cache for open files; it is not enabled by default. max specifies the number of cache entries and is recommended to match the number of open files; inactive specifies after how long without a request a cached file entry is removed.

open_file_cache_valid 80s;

#How often to revalidate the cached entries.

open_file_cache_min_uses 1;

#The minimum number of times a file must be used within the inactive period of the open_file_cache directive to keep its descriptor open in the cache. As in the example above, a file that is not used even once within the inactive period is removed.
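Putting the three cache directives above together, a minimal sketch using this article's values:

```nginx
open_file_cache          max=65535 inactive=60s;  # cache up to 65535 open files
open_file_cache_valid    80s;  # revalidate cached entries every 80s
open_file_cache_min_uses 1;    # entries used less than once in 60s are dropped
```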



Set up the HTTP server and use its reverse proxy features to provide load-balancing support


http {

include mime.types;

#Set MIME types; the types are defined by the mime.types file

default_type application/octet-stream;

log_format main '$host $status [$time_local] $remote_addr [$time_local] $request_uri '

'"$http_referer" "$http_user_agent" "$http_x_forwarded_for" '

'$bytes_sent $request_time $sent_http_x_cache_hit';

log_format log404 '$status [$time_local] $remote_addr $host$request_uri $sent_http_location';

$remote_addr and $http_x_forwarded_for are used to record the client's IP address;

$remote_user: records the client user name;

$time_local: records the access time and time zone;

$request: records the URL and HTTP protocol of the request;

$status: records the request status; success is 200;

$body_bytes_sent: records the size of the response body sent to the client;

$http_referer: records the page the request was linked from;

$http_user_agent: records information about the client's browser.

Usually the server sits behind a reverse proxy, so the client's IP address cannot be obtained through $remote_addr; the IP address obtained there is the reverse proxy server's address. The reverse proxy server can add X-Forwarded-For information to the HTTP headers of the forwarded request to record the original client's IP address and the server address the client originally requested.

access_log /dev/null;

#After log_format sets the log format, the access_log directive specifies the path where the log file is stored.

# access_log /usr/local/nginx/logs/access_log main;

server_names_hash_bucket_size 128;

#The hash tables holding server names are controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size is always a multiple of the processor cache-line size; this reduces memory accesses and speeds up hash-key lookups in the processor. If the hash bucket size equals the processor cache-line size, then in the worst case a key lookup needs two memory accesses: the first to determine the storage unit's address and the second to find the key within it. Therefore, if nginx reports that hash max size or hash bucket size needs to be increased, increase hash max size first.
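A sketch of acting on that advice: when nginx warns that the server-names hash cannot be built, raise server_names_hash_max_size first (the 1024 value is illustrative):

```nginx
server_names_hash_max_size    1024;  # increase this one first
server_names_hash_bucket_size 128;   # keep a multiple of the CPU cache-line size
```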


client_header_buffer_size 4k;

#Buffer size for the client request header, as above: set it according to the system page size (obtainable with getconf PAGESIZE), since a request header rarely exceeds 1k but the system page is usually larger than 1k.


large_client_header_buffers 8 128k;

#Maximum number and size of buffers for large client request headers.

By default nginx reads header values with the client_header_buffer_size buffer; if a header is too big, it falls back to large_client_header_buffers. If these are set too small and an HTTP header or Cookie is too big, nginx reports a 400 error (400 Bad Request). If the request line exceeds a buffer, it reports an HTTP 414 error (URI Too Long). The longest HTTP header nginx accepts must fit in one of these buffers, otherwise it reports a 400 error (Bad Request).
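The two header buffers are meant to be sized together; a sketch using this article's values:

```nginx
client_header_buffer_size   4k;      # small buffer tried first
large_client_header_buffers 8 128k;  # fallback: up to 8 buffers of 128k each
```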

open_file_cache max=102400;

Context: http, server, location. This directive specifies whether caching is enabled. If enabled, the following information is recorded per file: the open file descriptor, its size and modification time; existing directory information; errors during file lookup (no such file, cannot be read correctly, etc. -- see the open_file_cache_errors directive). Options:

·max - specifies the maximum number of cache entries; if the cache overflows, the least recently used (LRU) entries are removed.

Example: open_file_cache max=1000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on;

open_file_cache_errors

Syntax: open_file_cache_errors on | off. Default: open_file_cache_errors off. Context: http, server, location. This directive specifies whether file-lookup errors are cached.

open_file_cache_min_uses

Syntax: open_file_cache_min_uses number. Default: open_file_cache_min_uses 1. Context: http, server, location. This directive specifies the minimum number of uses of a file within the inactive period of the open_file_cache directive; with a larger value, file descriptors stay open in the cache only for frequently used files.

open_file_cache_valid

Syntax: open_file_cache_valid time. Default: open_file_cache_valid 60. Context: http, server, location. This directive specifies when to check the validity of cached entries in open_file_cache.

client_max_body_size 300m;

#Maximum size of a client request body (i.e. upload size) accepted by nginx


sendfile on;

#The sendfile directive specifies whether nginx calls the sendfile function (zero-copy mode) to output files. For normal applications it must be set to on. If nginx is used for disk-I/O-heavy applications such as downloads, it can be set to off to balance disk and network I/O processing speed and reduce system load.

tcp_nopush on;

#This option allows or disallows use of the TCP_CORK socket option; it is only used when sendfile is enabled.


proxy_connect_timeout 90;

#Timeout for establishing a connection with the backend server (handshake and wait for response)

proxy_read_timeout 180;

#Time to wait for the backend server's response after the connection succeeds; in effect, the time the request spends queued and being processed by the backend (i.e. the backend's processing time)

proxy_send_timeout 180;

#Timeout for transmitting the request to the backend server; the backend must accept all the data within the specified time
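Side by side, the three proxy timeouts cover the three phases of talking to a backend:

```nginx
proxy_connect_timeout 90;   # establishing the connection (handshake)
proxy_send_timeout    180;  # transmitting the request to the backend
proxy_read_timeout    180;  # waiting for the backend's response
```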

proxy_buffer_size 256k;

# Size of the buffer for the first part of the response read from the proxied server. This part usually contains just a small response header. By default it is the size of one buffer specified in the proxy_buffers directive, but it can be set smaller.

proxy_buffers 4 256k;

# Number and size of the buffers used to read the response from the proxied server. The default is also the page size, 4k or 8k depending on the operating system.

proxy_busy_buffers_size 256k;

proxy_temp_file_write_size 256k;

#Size of data written to proxy_temp_path at a time, to prevent a worker process from blocking too long while spooling files

proxy_temp_path /data0/proxy_temp_dir;

#The paths specified by proxy_temp_path and proxy_cache_path must be in the same partition

proxy_cache_path /data0/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;

#Set the in-memory cache zone to 200MB; content not accessed for 1 day is automatically cleared, and the on-disk cache size is 30GB.
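A sketch of actually using the cache zone defined above; the location path is illustrative:

```nginx
location /img/ {
    proxy_cache       cache_one;    # zone declared by proxy_cache_path
    proxy_cache_valid 200 304 12h;  # cache successful responses for 12h
    proxy_pass        http://img_relay;
}
```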

keepalive_timeout 120;

#keepalive timeout in seconds.

tcp_nodelay on;

client_body_buffer_size 512k;

#If you set this to a fairly large value such as 256k, submitting an image smaller than 256k with either firefox or IE works normally. If the directive is commented out and the default client_body_buffer_size is used -- twice the operating system page size, i.e. 8k or 16k -- the problem appears: whether with firefox 4.0 or IE 8.0, submitting a larger image of around 200k returns a 500 Internal Server Error.

proxy_intercept_errors on;

#Makes nginx intercept responses with an HTTP status code of 400 or higher.

upstream img_relay {

server 127.0.0.1:8027;

server 127.0.0.1:8028;

server 127.0.0.1:8029;

hash $request_uri;

}

nginx's upstream currently supports five distribution methods:

1. Round-robin (default)

Each request is assigned to a different backend server in turn, in chronological order. If a backend server goes down, it is automatically removed.
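Round-robin needs no extra directive; listing the servers is enough (addresses follow the examples below):

```nginx
upstream bakend {
    server 192.168.0.14;   # requests alternate between these
    server 192.168.0.15;
}
```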

2. weight

Specifies the polling probability; weight is proportional to the access ratio, for cases where backend server performance is uneven. For example:

upstream bakend {
server 192.168.0.14 weight=10;
server 192.168.0.15 weight=10;
}

3. ip_hash

Each request is assigned according to a hash of the client's ip, so that each visitor consistently accesses the same backend server; this can solve session-stickiness problems. For example:

upstream bakend {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}

4. fair (third party)

Requests are allocated according to the response time of the backend servers; those with shorter response times are served first.

upstream backend {
server server1;
server server2;
fair;
}

5. url_hash (third party)

Requests are distributed according to a hash of the accessed url, so that each url is directed to the same backend server; this is more effective when the backend servers cache. Example: add a hash statement to the upstream; weight and other parameters cannot be written on the server lines; hash_method is the hash algorithm used:

upstream backend {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}



tips:

upstream bakend {

#Define the ip and device status of the load-balancing pool

ip_hash;
server 127.0.0.1:9090 down;
server 127.0.0.1:8080 weight=2;
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;
}

Add the following to a server that needs load balancing:

proxy_pass http://bakend/;
The status of each device is set as follows:

1. down means this server temporarily does not participate in the load.

2. weight defaults to 1; the larger the weight, the larger the share of the load.

3. max_fails: the number of allowed request failures, default 1. When the maximum number is exceeded, the error defined by the proxy_next_upstream module is returned.

4. fail_timeout: the pause time after max_fails failures.

5. backup: the backup machine is requested only when all other non-backup machines are down or busy, so it carries the least load.

nginx supports setting up multiple groups of load balancing at the same time, for use by different servers.
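A sketch of that multi-group setup (the group names and addresses are illustrative); each server block simply proxies to its own upstream:

```nginx
upstream static_pool  { server 10.0.0.1:80;  server 10.0.0.2:80; }
upstream dynamic_pool { server 10.0.1.1:8080; }

server { listen 80; server_name static.example.com;  location / { proxy_pass http://static_pool; } }
server { listen 80; server_name dynamic.example.com; location / { proxy_pass http://dynamic_pool; } }
```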

client_body_in_file_only can be set to On to record the data POSTed by the client into a file, for debugging.

client_body_temp_path sets the directory for those files; up to 3 levels of subdirectories can be configured.

location matches against the URL; it can redirect or perform new proxying / load balancing.


server

#Configure a virtual host

{

listen 80;

#Configure listening port

server_name image.***.com;

#Configure access domain name

location ~* \.(mp3|exe)$ {

#Load-balance addresses ending in "mp3" or "exe"

proxy_pass http://img_relay$request_uri;

# Set the port or socket of the proxied server, and the URL

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# The purpose of the above three lines is to pass the user information received by the proxy server on to the real server

}

location /face {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/face.log log404;

#Set this server's access log

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

location /image {


if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/image.log log404;

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

}

server

{

listen 80;

server_name *.***.com *.***.cn;

location ~* \.(mp3|exe)$ {

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

location / {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

#error_page 404 http://i1.***img.com/help/noimg.gif;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/baijiaqi.log log404;

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

#access_log off;

}


server

{

listen 80;

server_name *.***img.com;


location ~* \.(mp3|exe)$ {

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}


location / {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

#error_page 404 http://i1.***img.com/help/noimg.gif;

error_page 404 = @fetch;

}

#access_log off;

location @fetch {

access_log /data/logs/baijiaqi.log log404;

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

}

server

{

listen 8080;

server_name ngx-ha.***img.com;

location / {

stub_status on;

access_log off;

}

}

server {

listen 80;

server_name imgsrc1.***.net;

root html;

}

server {

listen 80;

server_name ***.com w.***.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.***.com/;

}

}

server {

listen 80;

server_name *******.com w.*******.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.*******.com/;

}

}

server {

listen 80;

server_name ******.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.******.com/;

}

}

location /NginxStatus {
stub_status on;
access_log off;
auth_basic "NginxStatus";
auth_basic_user_file conf/htpasswd;
}

#Set the address for viewing Nginx status


location ~ /\.ht {
deny all;
}

#Deny access to .ht* files

}

Note: variables

The ngx_http_core_module module supports built-in variables whose names match apache's built-in variables.

First are the variables describing lines in the client request header, such as $http_user_agent, $http_cookie, and so on.

In addition there are some other variables:

$args is equal to the arguments in the request line

$content_length equals the value of the request's "Content-Length" header

$content_type equals the value of the request's "Content-Type" header

$document_root equals the value specified by the root directive for the current request

$document_uri is the same as $uri

$host equals the value of the "Host" line in the request header, or the name of the server the request arrived at if there is no Host line

$limit_rate allows limiting the connection rate

$request_method equals the request's method, usually "GET" or "POST"

$remote_addr is the client's ip

$remote_port is the client's port

$remote_user equals the user name, authenticated by ngx_http_auth_basic_module

$request_filename is the path name of the currently requested file, combined from root or alias and the request URI

$request_body_file

$request_uri is the complete initial URI, including arguments

$query_string is the same as $args

$scheme is the HTTP scheme, evaluated for example in a rewrite such as: rewrite ^(.+)$ $scheme://example.com$1 redirect;

$server_protocol equals the request's protocol, "HTTP/1.0" or "HTTP/1.1"

$server_addr is the ip of the server the request arrived at. Generally, obtaining this variable's value requires a system call; to avoid the system call, specify the ip in the listen directive and use the bind parameter.

$server_name is the name of the server the request arrived at

$server_port is the port number of the server the request arrived at

$uri equals the URI of the current request, which can differ from the initial value, e.g. after an internal redirect or when using index
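Several of these variables can be seen together in a custom log format; a sketch (the format name vars_demo is illustrative):

```nginx
log_format vars_demo '$remote_addr "$request_method $uri" '
                     'args="$args" status=$status host=$host';
```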



Nginx Chinese wiki: http://wiki.nginx.org/NginxChs

http://www.queryer.cn/DOC/nginxCHS/index.html

The above is a detailed explanation of the Nginx configuration file; I hope it is helpful to friends interested in PHP tutorials.
