Our site currently handles tens of millions of page views and millions of concurrent connections. I have picked up some experience optimizing PHP and nginx along the way, and I am writing it down here as notes.
1. TCP sockets vs. Unix sockets
For local communication, Unix domain sockets perform better than TCP sockets, because they involve less data copying and fewer context switches.
upstream backend {
    server unix:/var/run/fastcgi.sock;
    # server 127.0.0.1:8080;
}
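For completeness, here is a minimal sketch of the matching PHP-FPM pool configuration; the pool file path and the nginx user/group are assumptions that vary by distribution:

; e.g. /etc/php-fpm.d/www.conf (path is an assumption; varies by distro)
listen = /var/run/fastcgi.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660

The upstream block above is then referenced from the PHP location with fastcgi_pass backend;.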
2. Disable or optimize access_log
Under heavy traffic, the access log generates a large amount of disk I/O. If the logs are not needed, disable them.
access_log off;
log_not_found off;
Or turn on buffering
access_log /var/log/nginx/access.log main buffer=32k;
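With buffering, entries are written out when the buffer fills. On nginx 1.3.10 and later a flush interval can also be set so buffered entries still reach disk within a bounded delay; the 5s value here is only an example:

access_log /var/log/nginx/access.log main buffer=32k flush=5s;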
3. Turn on Gzip
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_vary on;
gzip_proxied expired no-cache no-store private auth;
gzip_disable "MSIE [1-6]\.";
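One quick way to confirm that compression is actually applied (the URL is a placeholder for a real page on the site):

curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null http://127.0.0.1/ | grep -i content-encoding
# expected: Content-Encoding: gzip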
4. Optimize FastCGI output buffering
These directives control how nginx buffers responses coming back from PHP-FPM and how long it waits on the backend:

fastcgi_buffers 256 16k;
fastcgi_buffer_size 128k;
fastcgi_connect_timeout 3s;
fastcgi_send_timeout 120s;
fastcgi_read_timeout 120s;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
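If the buffers are too small, nginx spills large PHP responses to temporary files on disk and warns about it in the error log. A rough way to check, assuming the default error log path (adjust to your setup):

grep "buffered to a temporary file" /var/log/nginx/error.log

If such warnings show up frequently, increasing fastcgi_buffers or fastcgi_busy_buffers_size usually makes them go away.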
5. Optimize worker processes
nginx is multi-process rather than multi-threaded, so the process-related configuration is worth tuning as follows.
First look at the number of processors on the server.
cat /proc/cpuinfo | grep processor
Then set worker_processes to the number of processors found. worker_connections is the maximum number of connections each worker process can open and can be raised from the default. Here is a reference configuration.
# We have 16 cores
worker_processes 16;

# connections per worker
events {
    worker_connections 4096;
    multi_accept on;
}

Remember that multi_accept must be enabled.
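On nginx 1.2.5 and later, worker_processes auto; lets nginx size the worker count to the detected cores by itself. A sketch combining this with a raised per-worker file-descriptor limit; the 65535 value is an illustrative assumption and should be kept in line with the system's open-file limits:

worker_processes auto;        # let nginx detect the core count itself
worker_rlimit_nofile 65535;   # illustrative; keep it above worker_connections
events {
    worker_connections 4096;
    multi_accept on;
}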