How to configure Tcp load balancing in Nginx
This article uses Nginx as a proxy server for load balancing. It is only a simple application example and does not cover the underlying principles.
(Hosts are limited here, so port 8000 on the host 42.192.22.128 serves as the proxy server's listening port, and 8181 is the service port on each upstream host.)
The client connects to the proxy server at 42.192.22.128:8000, and the proxy server distributes each request to one of the upstream servers.
To configure TCP load balancing, modify the Nginx configuration file (after installation from source it is at /usr/local/nginx/conf/nginx.conf, and the Nginx executable is in the /usr/local/nginx/sbin directory). Add the following fields to nginx.conf:
```nginx
stream {
    upstream Server {
        server 42.192.22.128:8181 weight=1 max_fails=3 fail_timeout=30s;
        server 1.13.180.100:8181  weight=1 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 8000;
        proxy_pass Server;
    }
}
```
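Note that `stream` is a top-level context: it must sit alongside the `http` block in nginx.conf, not inside it, and the Nginx build must include the stream module (`--with-stream` when building from source). A skeleton showing the placement (the surrounding blocks stand for whatever your existing nginx.conf already contains):

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

# TCP/UDP proxying lives in its own top-level stream context ...
stream {
    # the upstream and server blocks shown above go here
}

# ... parallel to, never inside, the usual http context
http {
    # existing http configuration
}
```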
Two configuration blocks are involved here: `upstream` and `server`. The `upstream` block defines the two hosts. `weight` is the weight; both hosts are set to 1, so the proxy server distributes client requests evenly between them. `max_fails` works together with `fail_timeout`: if forwarding to an upstream server fails `max_fails` (here 3) times within one `fail_timeout` period, that server is considered unavailable for the remainder of the current `fail_timeout` period (here 30 seconds). The `server` block sets the proxy server's listening port to 8000, and `proxy_pass` refers to the `upstream` block by its name, `Server`.
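The distribution behavior described above can be sketched in Python. This is a simplified model of weighted round-robin with failure accounting, not Nginx's actual implementation; the class and function names are illustrative, and only the addresses, weights, and thresholds come from the config above:

```python
import itertools
import time

class Upstream:
    """Simplified model of one upstream server's health accounting."""
    def __init__(self, addr, weight=1, max_fails=3, fail_timeout=30.0):
        self.addr = addr
        self.weight = weight
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def available(self, now):
        return now >= self.down_until

    def record_failure(self, now):
        self.fails += 1
        if self.fails >= self.max_fails:
            # After max_fails failures, mark the server unavailable
            # for fail_timeout seconds and reset the counter.
            self.down_until = now + self.fail_timeout
            self.fails = 0

def pick(servers, counter, now):
    """Weighted round-robin over the servers that are currently available."""
    pool = [s for s in servers if s.available(now)]
    expanded = [s for s in pool for _ in range(s.weight)]
    return expanded[next(counter) % len(expanded)] if expanded else None

servers = [Upstream("42.192.22.128:8181"), Upstream("1.13.180.100:8181")]
counter = itertools.count()

# With equal weights, requests alternate between the two upstreams.
picks = [pick(servers, counter, time.time()).addr for _ in range(4)]
print(picks)

# Simulate three forwarding failures to the second host: it drops out
# of the pool for fail_timeout seconds, and everything goes to the first.
now = time.time()
for _ in range(3):
    servers[1].record_failure(now)
print(pick(servers, counter, now).addr, pick(servers, counter, now).addr)
```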
After the configuration is complete, run `nginx -s reload` so the running Nginx rereads the configuration and the new settings take effect.
On each upstream server, use the "Swiss Army knife" `nc` command to simulate a TCP server listening on the service port, e.g. `nc -l 8181` (some netcat variants need `nc -l -p 8181`). (The IP address here is the intranet IP of the cloud host.)
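Where `nc` is not available, a few lines of Python can stand in for the simulated upstream. This is a minimal sketch; the demo below runs both ends on localhost for self-containment, whereas in the article's setup the listener would run on port 8181 of each cloud host and the client would go through the proxy:

```python
import socket
import threading

def run_upstream(port, ready, received):
    """Minimal stand-in for `nc -l <port>`: accept one TCP connection
    and record whatever the client sends."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()                      # listener is up; client may connect now
    conn, _addr = srv.accept()
    received.append(conn.recv(1024).decode())
    conn.close()
    srv.close()

# Demo entirely on localhost.
ready, received = threading.Event(), []
t = threading.Thread(target=run_upstream, args=(8181, ready, received))
t.start()
ready.wait()
with socket.create_connection(("127.0.0.1", 8181), timeout=5) as cli:
    cli.sendall(b"hello upstream")
t.join()
print(received[0])
```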
Use a simple Qt applet to simulate the client:
```cpp
void Widget::on_btnConnection_clicked()
{
    m_pTcpSocket->connectToHost(ui->lineeditIp->text(),
                                ui->lineeditPort->text().toUShort());
    qDebug() << m_pTcpSocket->state();
}

void Widget::on_btnSend_clicked()
{
    qDebug() << m_pTcpSocket->state();
    QByteArray byteArray;
    byteArray.append(ui->texteditMsg->toPlainText());
    const char *pChatMsg = byteArray.data();
    qDebug() << m_pTcpSocket->write(pChatMsg, byteArray.size());
}
```
Start two clients one after another, connect to 42.192.22.128:8000 over TCP, and send messages. The messages arrive on the two different hosts, confirming that the proxy server really does distribute client requests across the upstream servers.
The above is the detailed content of How to configure Tcp load balancing in Nginx. For more information, please follow other related articles on the PHP Chinese website!