How to configure TCP load balancing in Nginx

May 19, 2023

Assuming that a Kubernetes cluster has already been configured, we will create a CentOS-based virtual machine for Nginx.

The following are the details of the settings in the experiment:

  • Nginx (CentOS 8 Minimal) – 192.168.1.50

  • Kube Master – 192.168.1.40

  • Kube Worker 1 – 192.168.1.41

  • Kube Worker 2 – 192.168.1.42

Step 1) Install the EPEL repository

Because the nginx package is not in the default repositories of the CentOS system, you need to install the EPEL repository:

[root@nginxlb ~]# dnf install epel-release -y

Step 2) Install Nginx

Run the following command to install nginx:

[root@nginxlb ~]# dnf install nginx -y

Use the rpm command to verify the details of the Nginx package:

[root@nginxlb ~]# rpm -qi nginx
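
If you only need the installed version number, a quicker check is:

[root@nginxlb ~]# nginx -v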

Configure the firewall to allow access to nginx's http and https services:

[root@nginxlb ~]# firewall-cmd --permanent --add-service=http
[root@nginxlb ~]# firewall-cmd --permanent --add-service=https
[root@nginxlb ~]# firewall-cmd --reload
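
As an optional sanity check, list the services now allowed by the firewall:

[root@nginxlb ~]# firewall-cmd --list-services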

Use the following commands to set SELinux to permissive mode, then restart the system so the change takes effect:

[root@nginxlb ~]# sed -i s/^SELINUX=.*$/SELINUX=permissive/ /etc/selinux/config
[root@nginxlb ~]# reboot
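
If you prefer not to wait for a reboot, SELinux can also be switched to permissive mode for the current session (the config change above still makes it persistent across reboots):

[root@nginxlb ~]# setenforce 0
[root@nginxlb ~]# getenforce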

Step 3) Get the application’s NodePort details from Kubernetes

[kadmin@k8s-master ~]$  kubectl get all -n ingress-nginx

In the output of this command, you can see that NodePort 32760 on each worker node is mapped to port 80 of the ingress controller and NodePort 32375 is mapped to port 443. We will use these NodePorts in the Nginx configuration file for load balancing.
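
If you only need the port mappings, the ingress controller's service can also be queried directly (the namespace matches the one used above; adjust it if your controller runs elsewhere):

[kadmin@k8s-master ~]$ kubectl get svc -n ingress-nginx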

Step 4) Configure Nginx for load balancing

Edit the nginx configuration file and add the following:

[root@nginxlb ~]# vim /etc/nginx/nginx.conf

Comment out the default "server" block (lines 38 to 57):

And add the following lines:

upstream backend {
  server 192.168.1.41:32760;
  server 192.168.1.42:32760;
}

server {
  listen 80;
  location / {
      proxy_read_timeout 1800;
      proxy_connect_timeout 1800;
      proxy_send_timeout 1800;
      send_timeout 1800;
      proxy_set_header        Accept-Encoding   "";
      proxy_set_header        X-Forwarded-By    $server_addr:$server_port;
      proxy_set_header        X-Forwarded-For   $remote_addr;
      proxy_set_header        X-Forwarded-Proto $scheme;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_pass http://backend;
  }

   location /nginx_status {
       stub_status;
   }
}

Save the configuration file and exit.
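
Before starting Nginx, it is worth validating the new configuration syntax:

[root@nginxlb ~]# nginx -t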

With these changes in place, all requests arriving on port 80 of Nginx will be routed to NodePort 32760 on the Kubernetes worker nodes (192.168.1.41 and 192.168.1.42).
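
The HTTPS NodePort (32375) noted in Step 3 is not handled by the HTTP proxy above. As a minimal sketch, it could be forwarded at the TCP level with Nginx's stream module; this assumes the stream module is available (on CentOS it is typically provided by the nginx-mod-stream package), and the block must sit at the top level of nginx.conf, outside the http block:

stream {
  # TCP pass-through of HTTPS traffic to the ingress controller's 443 NodePort
  upstream backend_https {
    server 192.168.1.41:32375;
    server 192.168.1.42:32375;
  }

  server {
    listen 443;
    proxy_pass backend_https;
  }
}

Because the traffic is proxied at the TCP level, TLS termination stays with the ingress controller on the worker nodes.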

Start and enable the Nginx service with the following commands:

[root@nginxlb ~]# systemctl start nginx
[root@nginxlb ~]# systemctl enable nginx
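
Optionally, confirm that the service is active and listening on port 80:

[root@nginxlb ~]# systemctl status nginx
[root@nginxlb ~]# ss -tlnp | grep nginx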

Test Nginx’s TCP load balancer

To test whether Nginx works correctly as a load balancer for Kubernetes, deploy an nginx-based deployment, expose it as a service on port 80, and define an ingress resource for the deployment. The following commands were used to create these Kubernetes objects:

[kadmin@k8s-master ~]$ kubectl create deployment nginx-deployment --image=nginx
deployment.apps/nginx-deployment created
[kadmin@k8s-master ~]$ kubectl expose deployments nginx-deployment  --name=nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed
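
The ingress resource mentioned above is not shown in these commands. Below is a minimal sketch of what it might look like for this deployment; the ingress object's name and the ingressClassName are assumptions, and the hostname matches the one used in the test that follows:

[kadmin@k8s-master ~]$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-deployment          # hypothetical name for the ingress object
spec:
  ingressClassName: nginx         # assumption: class name of the ingress-nginx controller
  rules:
  - host: nginx-lb.example.com    # hostname used in the browser test below
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deployment   # service created with "kubectl expose" above
            port:
              number: 80
EOF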

Run the following command to get the deployment, service (svc), and ingress details:
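
For example, all three can be listed in one call (run in the namespace where the objects above were created):

[kadmin@k8s-master ~]$ kubectl get deployments,svc,ingress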

Update the hosts file on the local host so that nginx-lb.example.com points to the IP address of the Nginx server (192.168.1.50):

[root@localhost ~]# echo "192.168.1.50  nginx-lb.example.com" >> /etc/hosts

Now try to access nginx-lb.example.com through a browser:
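
Alternatively, a quick check from the command line (this only verifies that the request reaches Nginx and is proxied through to a backend; an HTTP 200 response indicates the chain works):

[root@localhost ~]# curl -I http://nginx-lb.example.com/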
