How to configure TCP load balancing in Nginx
Assuming that the Kubernetes cluster has been configured, we will create a virtual machine for Nginx based on CentOS.
The following are the details of the lab setup:
Nginx (CentOS 8 Minimal) – 192.168.1.50
Kube Master – 192.168.1.40
Kube Worker 1 – 192.168.1.41
Kube Worker 2 – 192.168.1.42
Step 1) Install the EPEL repository
Because the nginx package is not in the default CentOS repositories, you need to install the EPEL repository first:
[root@nginxlb ~]# dnf install epel-release -y
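The EPEL repository should now appear in the repository list; a quick optional check:
[root@nginxlb ~]# dnf repolist | grep -i epel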
Step 2) Install Nginx
Run the following command to install nginx:
[root@nginxlb ~]# dnf install nginx -y
Use the rpm command to verify the details of the Nginx package:
[root@nginxlb ~]# rpm -qi nginx
Configure the firewall to allow access to nginx's http and https services:
[root@nginxlb ~]# firewall-cmd --permanent --add-service=http
[root@nginxlb ~]# firewall-cmd --permanent --add-service=https
[root@nginxlb ~]# firewall-cmd --reload
Use the following command to set SELinux to permissive mode, then reboot the system so the change takes effect:
[root@nginxlb ~]# sed -i 's/^SELINUX=.*$/SELINUX=permissive/' /etc/selinux/config
[root@nginxlb ~]# reboot
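After the reboot, a quick check confirms that both changes took effect: getenforce should report Permissive, and the firewall should list the http and https services:
[root@nginxlb ~]# getenforce
[root@nginxlb ~]# firewall-cmd --list-services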
Step 3) Get the application’s NodePort details from Kubernetes
[kadmin@k8s-master ~]$ kubectl get all -n ingress-nginx
As you can see from the above output, NodePort 32760 on each worker node is mapped to port 80 and NodePort 32375 is mapped to port 443. We will use these NodePorts in the Nginx configuration file for load balancing.
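If only the port numbers are needed, a narrower query also works. The sketch below assumes the controller service is named ingress-nginx-controller, which is the usual default for the ingress-nginx deployment; adjust the name to match your cluster:
[kadmin@k8s-master ~]$ kubectl get svc -n ingress-nginx
# the service name below is an assumption; change it if your controller service differs
[kadmin@k8s-master ~]$ kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}{"\n"}'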
Step 4) Configure Nginx for load balancing
Edit the nginx configuration file:
[root@nginxlb ~]# vim /etc/nginx/nginx.conf
Comment out the default "server" block (lines 38 to 57):
And add the following lines:
upstream backend {
    server 192.168.1.41:32760;
    server 192.168.1.42:32760;
}

server {
    listen 80;
    location / {
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
        proxy_send_timeout 1800;
        send_timeout 1800;
        proxy_set_header Accept-Encoding "";
        proxy_set_header X-Forwarded-By $server_addr:$server_port;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://backend;
    }
    location /nginx_status {
        stub_status;
    }
}
Save the configuration file and exit.
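The configuration above proxies plain HTTP on port 80. If the HTTPS NodePort (32375) should also be load balanced, nginx's stream module can pass that traffic through as raw TCP. The following is only a sketch: it assumes the stream module is available (on CentOS 8 it is typically packaged as nginx-mod-stream), and the block must sit at the top level of /etc/nginx/nginx.conf, outside the http {} block:
# sketch: TCP pass-through for the HTTPS NodePort; requires the nginx stream module
stream {
    upstream backend_https {
        server 192.168.1.41:32375;
        server 192.168.1.42:32375;
    }
    server {
        listen 443;
        proxy_pass backend_https;
    }
}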
With these changes, all requests arriving on port 80 of the nginx server are routed to NodePort 32760 on the Kubernetes worker nodes (192.168.1.41 and 192.168.1.42). Use the following commands to start and enable the Nginx service:
[root@nginxlb ~]# systemctl start nginx
[root@nginxlb ~]# systemctl enable nginx
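It is also worth validating the configuration syntax and confirming that the service came up (exact output varies by nginx version):
[root@nginxlb ~]# nginx -t
[root@nginxlb ~]# systemctl status nginx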
Test Nginx’s TCP load balancer
To test whether nginx works properly as a TCP load balancer for Kubernetes, deploy an nginx-based deployment, expose it as a service on port 80, and define an ingress resource for the deployment (a sketch of the ingress manifest follows below). I have used the following commands to create the deployment and the service:
[kadmin@k8s-master ~]$ kubectl create deployment nginx-deployment --image=nginx
deployment.apps/nginx-deployment created
[kadmin@k8s-master ~]$ kubectl expose deployments nginx-deployment --name=nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed
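The ingress resource is not created by the commands above. A minimal manifest along the following lines would route nginx-lb.example.com to the nginx-deployment service; it is only a sketch, and the resource name and ingressClassName: nginx are assumptions that should match the ingress class used in your cluster.
# sketch only: name and ingressClassName are assumptions, adjust for your cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-deployment
spec:
  ingressClassName: nginx
  rules:
  - host: nginx-lb.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deployment
            port:
              number: 80
Apply the manifest with kubectl apply -f, then run the following command to get the deployment, svc and ingress details:
[kadmin@k8s-master ~]$ kubectl get deployments,svc,ingress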
Update the hosts file on your local machine so that nginx-lb.example.com points to the IP address of the nginx server (192.168.1.50):
[root@localhost ~]# echo "192.168.1.50 nginx-lb.example.com" >> /etc/hosts
Then try to access nginx-lb.example.com in a browser.
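If a browser is not handy, curl provides the same check from the command line; the request should return the default nginx welcome page served by the pods behind the ingress:
[root@localhost ~]# curl http://nginx-lb.example.com/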