
How to deploy static pages using Nginx

不言 (Original) · 2018-06-04 10:40

This article introduces how to deploy static pages with Nginx. It should be a useful reference; friends in need can refer to it.

Nginx introduction

Nginx, pronounced "engine X", is a very lightweight, high-performance HTTP and reverse proxy server written in Russia, and it can also act as an IMAP/POP3/SMTP proxy server. It was developed by the Russian programmer Igor Sysoev for Rambler.ru, Russia's second most visited site, where it had been running for more than two and a half years at the time. Igor Sysoev released the project under a BSD license.

As an HTTP server, Nginx has the following basic features:

  1. Serving static files, index files and automatic directory indexing; open file descriptor caching.

  2. Reverse proxy acceleration without caching, with simple load balancing and fault tolerance.

  3. FastCGI support, with simple load balancing and fault tolerance.

  4. Modular architecture, including filters such as gzip compression, byte ranges, chunked responses, and an SSI filter. If a single page contains multiple SSI includes handled by FastCGI or another proxied server, they can be processed in parallel without waiting for one another.

  5. Support for SSL and TLS SNI.

Nginx's advantages are that it is lightweight, high-performance, and handles concurrency well. It also makes deploying static pages very convenient.

This high performance comes from Nginx's architecture. After Nginx starts, there is one master process and several worker processes. The master process mainly manages the worker processes: it receives signals from the outside, passes signals on to each worker, monitors the workers' state, and automatically starts a new worker when one exits abnormally. Basic network events are handled in the worker processes. The workers are peers: they compete on equal terms for client requests, and each is independent of the others. A request is handled entirely within one worker process, and a worker cannot handle another worker's requests. The number of worker processes is configurable and is usually set to match the number of CPU cores of the machine, which is tied to Nginx's process model and event-handling model.
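As a rough illustration of this model, the relevant top-level directives look roughly like the following (a minimal sketch; the values here are assumptions, not taken from this article's configuration):

worker_processes auto;        # let nginx match the number of workers to the CPU cores
                              # (or write an explicit number, e.g. 4 on a 4-core machine)

events {
    worker_connections 1024;  # maximum simultaneous connections each worker can handle
}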

Why choose Nginx

When it comes to Nginx, the first things that come to mind are probably reverse proxying and load balancing. So what is a reverse proxy, and what is load balancing?

Reverse proxy

First, understand what a forward proxy is. A proxy, also called a network proxy, is a special network service. Generally speaking, it acts as a middleman between the client and the target server: it receives the client's request, initiates a corresponding request to the target server on the client's behalf, obtains the specified resource from the target server, and returns it to the client. A proxy server can also cache the target server's resources locally; if the resource the client wants is already in the proxy's cache, the proxy returns the cached copy directly instead of sending a request to the target server.

Proxy servers are actually very common. For example, some of the proxies people use because of the GFW rely on overseas servers as proxies so that domain names resolve correctly and blocked sites become reachable. Proxy servers can also hide the real IP address; the well-known Tor (The Onion Router), for instance, uses multiple layers of proxies plus encryption to achieve anonymous communication.

A reverse proxy acts as a proxy on the server side rather than for the client. In other words, a forward proxy proxies internal network users' connection requests to servers on the Internet, whereas a reverse proxy accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results from those servers to the clients on the Internet that requested the connection. In this case, the proxy server appears to the outside world as the server itself.
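As a concrete sketch (not part of the original article; the backend address and domain are made up), a minimal reverse proxy in Nginx looks something like this:

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;            # forward requests to an internal backend
        proxy_set_header Host $host;                 # pass the original Host header through
        proxy_set_header X-Real-IP $remote_addr;     # pass the real client IP to the backend
    }
}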

Load Balancing

Reverse proxy load balancing dynamically forwards connection requests from the Internet to multiple servers on the internal network in a reverse-proxy manner; each server handles part of the requests, and the load is thus balanced across them.
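To make this concrete, here is a hedged sketch of such a setup using Nginx's upstream module (the pool name and addresses are assumptions):

upstream backend_pool {               # a made-up pool of internal servers
    server 192.168.0.10:8080;
    server 192.168.0.11:8080;
    # least_conn;                     # optional strategy; the default is round-robin
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_pool;   # requests are spread across the pool
    }
}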

As it happens, Nginx does all of this.

As an excellent proxy server, Nginx naturally provides both reverse proxying and load balancing. If you want to learn more about these features and how to use them, please see the reference material given at the end of the article: the Nginx Getting Started Guide.

Nginx installation

I am using a Tencent Cloud server running Ubuntu Server 14.04.1 LTS 32-bit.

$ apt-get install nginx
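After installation, a quick sanity check (these commands are my addition, assuming Ubuntu's service wrapper; the package usually starts Nginx automatically):

$ nginx -v                  # print the installed version
$ sudo service nginx start  # start the service if it is not already running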

For Mac OS, refer to this article: Installing Nginx in Mac OS X.

Nginx configuration

All that remains is to edit the Nginx configuration file so that these settings take effect when Nginx starts. This is also the focus of this article.

Nginx's configuration system consists of a main configuration file plus some auxiliary configuration files. They are all plain text files, and in general you only need to edit the main configuration file. On my server, for example, it is located at /etc/nginx/nginx.conf.

Directive contexts

The configuration in nginx.conf is grouped by its logical meaning into several scopes, also known as configuration directive contexts. Each scope contains one or more configuration items.

Each configuration item consists of a directive and its parameters, forming a key-value pair; anything after a # is a comment, so the file is very easy to read.

The general structure of the configuration file, with common settings, looks like this:

user www-data;  # the user (and group) that the nginx worker processes run as
worker_processes 1;  # start one nginx worker process; usually set this to the number of CPU cores
pid /run/nginx.pid;  # path of the pid file

events {
    worker_connections 768;  # each worker process can handle up to 768 simultaneous connections
    # multi_accept on;
}

# parameters related to serving HTTP; the defaults are usually fine, and the main configuration is the server context inside this http context
http {
    ##
    # Basic Settings
    ##

    ... common default settings omitted here

    ##
    # Logging Settings
    ##
    ... common default settings omitted here

    ##
    # Gzip Settings
    ##

    ... common default settings omitted here

    ##
    # nginx-naxsi config
    ##

    ... common default settings omitted here

    ##
    # nginx-passenger config
    ##

    ... common default settings omitted here

    ##
    # Virtual Host Configs
    ##

    ... common default settings omitted here

    # now add a server context here to start configuring a domain; one server block usually corresponds to one domain
    server {
        listen 80;        # listen on port 80 on all of this machine's IPs
        server_name _;      # domain name, e.g. www.example.com; here "_" matches everything
        root /home/filename/;  # site root directory

        location / {       # there can be multiple location blocks for configuring routes
            try_files index.html =404;
        }
    }
}

# mail settings; since they are not needed here, the whole mail context is commented out
#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#    
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#   
#    server {
#        listen   localhost:110;
#        protocol  pop3;
#        proxy    on;
#    }
#
#    server {
#        listen   localhost:143;
#        protocol  imap;
#        proxy    on;
#    }
#}

Pay particular attention to the server context inside the http context.

server {
    listen 80;        # listen on port 80 on all of this machine's IPs
    server_name _;      # domain name, e.g. www.example.com; here "_" matches everything
    root /home/filename/;  # site root directory

    location / {       # there can be multiple location blocks for configuring routes
      try_files index.html =404;
    }
}

It is best to put the root directive outside the location blocks rather than inside one, to avoid situations where CSS and JS files fail to load. The browser's requests for CSS and JS are not handled automatically: Nginx has to be told where to find those files, and if root only applied inside a single location, extra configuration would be needed to serve them. For deploying static pages, keeping root at the server level is therefore the most convenient approach.

To explain root a bit further: suppose the server has a directory /home/zhihu/ containing index.html plus css/ and img/ subdirectories. Then root /home/zhihu/; tells the server to look under /home/zhihu/ when resolving requests for resources.

Next, several kinds of matching can follow location, and the different matching forms have different priorities. Here is an example of an exact match:

server {
    listen 80;        
    server_name _;      
    root /home/zhihu/;  

    location = /zhihu {
      rewrite ^/.* / break;
      try_files index.html =404;
    }
}

Now, visiting www.example.com/zhihu will load the zhihu page. Because of the exact location match, only the route www.example.com/zhihu gets this response, and the rewrite regex is needed to turn /zhihu back into the original / . For more about the location directive, see the reference material given at the end of the article.
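For reference, here is a rough sketch of the common location matching forms and their priority, from highest to lowest (this summary is my addition, not from the original article):

server {
    listen 80;
    root /home/zhihu/;

    location = /exact        { return 200 "exact match\n"; }   # "=" exact match, checked first
    location ^~ /static/     { try_files $uri =404; }          # "^~" prefix match that skips the regex checks
    location ~* \.(png|jpg)$ { expires 7d; }                   # "~*" case-insensitive regex ("~" is case-sensitive)
    location /               { try_files $uri /index.html; }   # plain prefix match, used last
}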

The simplest and most convenient configuration for deploying static pages with Nginx

Quite a lot has been said about configuration above; below is the approach I personally find most convenient. (Special thanks to my senior classmate guyskk for answering my questions.)

First create a directory, for example /home/ubuntu/website, and place the static page files you want to deploy inside this website folder. For example, under website I have three folders, google, zhihu and fenghuang, and the server block is configured as follows:

server {
    listen 80;
    server_name _;
    root /home/ubuntu/website;
    index index.html;
}

Here the static page file in every folder is named index.html. I used to have the bad habit of naming, say, the zhihu page zhihu.html, but from a front-end point of view that does not follow convention either.

With this configuration, when you visit www.showzeng.cn/google/, Nginx goes to the google folder under the website directory, finds index.html there, and returns the google page; likewise, visiting www.showzeng.cn/zhihu/ finds index.html under the zhihu folder and returns the zhihu page.

And if you also put your site's home page index.html at the same level as the zhihu, google and fenghuang folders, it will be returned when you visit www.example.com.
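Under these assumptions the directory layout and the pages it serves look roughly like this (the file names simply follow the example above):

/home/ubuntu/website/
├── index.html              # served for www.example.com/
├── google/index.html       # served for www.showzeng.cn/google/
├── zhihu/index.html        # served for www.showzeng.cn/zhihu/
└── fenghuang/index.html    # served for www.showzeng.cn/fenghuang/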

The only flaw is that a trailing / gets appended automatically when you visit www.showzeng.cn/zhihu. If you press F12 in the browser and check, you will see that www.showzeng.cn/zhihu returns a 301 status code: because index.html lives inside the zhihu/ folder, the lookup redirects to www.showzeng.cn/zhihu/. At first I could not accept this; that extra / looked really annoying. But the moment I thought about having to write a location block to match every single path instead, I accepted it at once. I don't know how you feel about it, but I can live with it.
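If you want to see the redirect for yourself, something like the following should show it (a hypothetical check with curl; headers abbreviated):

$ curl -I http://www.showzeng.cn/zhihu
HTTP/1.1 301 Moved Permanently
Location: http://www.showzeng.cn/zhihu/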

Starting and running Nginx

$ sudo nginx -s reload

With reload there is no need to restart the service: the configuration file is simply reloaded, clients notice no interruption, and the switchover is seamless. Of course, you can also restart the Nginx service instead.

$ sudo service nginx restart
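Before reloading or restarting, it is also worth validating the configuration file first, for example:

$ sudo nginx -t    # test the configuration and report syntax errors before applying it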

Stopping Nginx

$ sudo nginx -s stop


