
Let’s talk about the king of Linux network performance—XDP technology

WBOY | 2024-03-08


Hello everyone, today let's talk about XDP technology.

Many Linux developers may be unfamiliar with XDP, even some who work on network-related development. If you are a Linux developer and do not know XDP, you may be missing out on many opportunities.

I once used XDP to optimize a project and improved its network processing performance by 3-4x. Some people may assume the project's original performance must have been poor for there to be that much room for improvement.

In my view, under the original software architecture the performance bottleneck would have been hard to eliminate even with further tuning; a more efficient architecture was needed to solve the problem at a more fundamental level.

My follow-up project, Magic Box, will also use XDP. With XDP, the network performance of Magic Box is expected to improve by roughly 3x.

1. Introduction to XDP technology

1.1 XDP technical background

With the arrival of ultra-high-bandwidth networking (10G, 40G, and 100G), the Linux kernel protocol stack has struggled to keep up with these new network technologies and has increasingly become a bottleneck that wastes network performance. To get out of this awkward situation, kernel bypass technology was introduced. The core idea of kernel bypass is that network packets skip the kernel protocol stack and are processed directly by user programs, which avoids the overhead of the protocol stack and greatly improves network performance.

XDP is the Linux kernel's own answer to this problem; its usual counterpart is DPDK. DPDK performs very well, but it is not fully integrated with the Linux system.

1.2 What is XDP?

XDP (eXpress Data Path) is a Linux kernel technology that uses the eBPF mechanism to achieve high-performance packet processing and forwarding in kernel space.

XDP can significantly improve network performance and offers a flexible programming interface that lets users implement all kinds of custom network functions. Compared with traditional user-space packet processing, XDP effectively reduces packet processing latency and CPU usage.

XDP has three working modes:

  • Native (driver) mode (high performance, requires NIC driver support): the XDP program runs inside the network card driver, and packets are filtered or redirected directly in the driver. Most mainstream NICs support this mode and its performance is high; if your NIC supports it, prefer this mode.
  • Offload mode (highest performance, fewest supported NICs): the XDP program is offloaded onto the network card hardware itself. Very few NICs support it, so it is not discussed further here.
  • Generic mode (decent performance, broadest kernel support): the XDP program runs at the entry of the Linux kernel protocol stack and requires no driver support. Its performance is lower than the other two modes, but it still improves on the regular stack to a certain extent.

XDP itself will get a dedicated article later, so it is not covered in depth here.

2. Working principle of AF_XDP

2.1 Overall Architecture

Many developers confuse XDP with AF_XDP.

  • XDP is a new network technology built on eBPF.
  • AF_XDP is one application of XDP; it is a high-performance Linux socket type.

An AF_XDP socket is created with the socket() function:

socket(AF_XDP, SOCK_RAW, 0);
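For a slightly fuller picture, here is a minimal, hedged sketch of creating the socket and binding it to one queue of a NIC. The interface name eth0 and queue id 0 are placeholders, and in a real program the UMEM and rings described below must be configured before bind() will succeed.

#include <linux/if_xdp.h>   /* struct sockaddr_xdp */
#include <net/if.h>         /* if_nametoindex() */
#include <sys/socket.h>     /* socket(), bind(), AF_XDP on recent glibc */
#include <stdio.h>

int main(void)
{
    int fd = socket(AF_XDP, SOCK_RAW, 0);
    if (fd < 0) {
        perror("socket(AF_XDP)");
        return 1;
    }

    /* Bind the socket to one RX/TX queue of a network interface.
     * In a complete program, UMEM and the four rings are set up
     * before this call; here we only show the address structure. */
    struct sockaddr_xdp sxdp = {
        .sxdp_family   = AF_XDP,
        .sxdp_ifindex  = if_nametoindex("eth0"),  /* placeholder NIC   */
        .sxdp_queue_id = 0,                       /* placeholder queue */
    };
    if (bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp)) < 0)
        perror("bind(AF_XDP)");

    return 0;
}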

AF_XDP involves a few important concepts:


  • AF_XDP needs a cooperating XDP program to complete packet reception and transmission.
  • The XDP program's main job is to filter and redirect packets based on information in the Ethernet frame, such as the MAC address or the 5-tuple (see the kernel-side sketch after this list).
  • AF_XDP deals in Ethernet frames, so the user program sends and receives raw Ethernet frames.
  • The user program, AF_XDP, and the XDP program all operate on a shared memory area called UMEM.
  • Packet reception and transmission use four lock-free ring queues.
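To make the second bullet concrete, below is a minimal kernel-side sketch of an XDP program that redirects traffic to AF_XDP sockets through a BPF_MAP_TYPE_XSKMAP. The map name xsks_map and the policy of redirecting every frame on a queue that has a socket attached are illustrative assumptions, not something mandated by AF_XDP; a real program would usually add MAC or 5-tuple filtering here.

/* xdp_sock_kern.c: build with clang -O2 -g -target bpf -c xdp_sock_kern.c */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* One AF_XDP socket fd per RX queue, inserted by the user program. */
struct {
    __uint(type, BPF_MAP_TYPE_XSKMAP);
    __uint(max_entries, 64);
    __type(key, __u32);
    __type(value, __u32);
} xsks_map SEC(".maps");

SEC("xdp")
int xdp_sock_prog(struct xdp_md *ctx)
{
    __u32 queue = ctx->rx_queue_index;

    /* If an AF_XDP socket is bound to this queue, hand the frame to it;
     * otherwise fall back to the normal kernel protocol stack. */
    if (bpf_map_lookup_elem(&xsks_map, &queue))
        return bpf_redirect_map(&xsks_map, queue, XDP_PASS);

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";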

2.2 UMEM shared memory

UMEM shared memory is registered through the setsockopt() function.

setsockopt(umem->fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));

UMEM is divided into fixed-size chunks, usually 4 KB each; one chunk holds one packet, and a UMEM area typically consists of 4096 chunks.

Both received and transmitted packets are stored in these UMEM chunks.

The user program and the kernel can both access this memory area directly, so sending and receiving packets comes down to a simple memory copy, with no system call required.

The user program needs to maintain a UMEM usage record that tracks whether each UMEM chunk is currently in use; each record holds a relative address (offset) that locates the corresponding chunk.
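Below is a minimal sketch of allocating and registering a UMEM area matching the sizes just described (4096 chunks of 4 KB). The helper name umem_register is made up for the example, fd is the AF_XDP socket descriptor from earlier, and error handling is trimmed.

#include <linux/if_xdp.h>   /* struct xdp_umem_reg, XDP_UMEM_REG */
#include <sys/mman.h>
#include <sys/socket.h>

#define CHUNK_SIZE  4096
#define NUM_CHUNKS  4096

static void *umem_area;

static int umem_register(int fd)
{
    /* Page-aligned anonymous memory; after registration the kernel
     * shares it with user space, and each 4 KB chunk holds one packet. */
    umem_area = mmap(NULL, (size_t)CHUNK_SIZE * NUM_CHUNKS,
                     PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (umem_area == MAP_FAILED)
        return -1;

    struct xdp_umem_reg mr = {
        .addr       = (__u64)(unsigned long)umem_area,
        .len        = (__u64)CHUNK_SIZE * NUM_CHUNKS,
        .chunk_size = CHUNK_SIZE,
        .headroom   = 0,
    };
    return setsockopt(fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));
}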

2.3 Lock-free ring queues

An AF_XDP socket has four lock-free ring queues in total:

  • Fill ring (FILL RING)
  • Completion ring (COMPLETION RING)
  • Transmit ring (TX RING)
  • Receive ring (RX RING)


The ring queues are created as follows:

// Create the FILL RING
setsockopt(fd, SOL_XDP, XDP_UMEM_FILL_RING, &umem->config.fill_size, sizeof(umem->config.fill_size));
// Create the COMPLETION RING
setsockopt(fd, SOL_XDP, XDP_UMEM_COMPLETION_RING, &umem->config.comp_size, sizeof(umem->config.comp_size));
// Create the RX RING
setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING, &xsk->config.rx_size, sizeof(xsk->config.rx_size));
// Create the TX RING
setsockopt(xsk->fd, SOL_XDP, XDP_TX_RING, &xsk->config.tx_size, sizeof(xsk->config.tx_size));
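After the ring sizes have been set with the calls above, the rings themselves are mapped into user space with mmap(); the offsets of the producer index, consumer index, and descriptor array are queried with XDP_MMAP_OFFSETS first. A hedged sketch for the RX ring is shown below (the struct rx_ring wrapper is made up for the example; the TX, FILL, and COMPLETION rings follow the same pattern with their own XDP_PGOFF_* page offsets and element types).

#include <linux/if_xdp.h>   /* XDP_MMAP_OFFSETS, XDP_PGOFF_RX_RING, struct xdp_mmap_offsets */
#include <sys/mman.h>
#include <sys/socket.h>

struct rx_ring {
    __u32 *producer;        /* advanced by the kernel              */
    __u32 *consumer;        /* advanced by user space              */
    struct xdp_desc *desc;  /* descriptor array                    */
    __u32 mask;             /* size - 1, size is a power of two    */
};

static int rx_ring_map(int fd, __u32 size, struct rx_ring *r)
{
    struct xdp_mmap_offsets off;
    socklen_t optlen = sizeof(off);

    if (getsockopt(fd, SOL_XDP, XDP_MMAP_OFFSETS, &off, &optlen))
        return -1;

    void *map = mmap(NULL, off.rx.desc + size * sizeof(struct xdp_desc),
                     PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, XDP_PGOFF_RX_RING);
    if (map == MAP_FAILED)
        return -1;

    r->producer = (__u32 *)((char *)map + off.rx.producer);
    r->consumer = (__u32 *)((char *)map + off.rx.consumer);
    r->desc     = (struct xdp_desc *)((char *)map + off.rx.desc);
    r->mask     = size - 1;
    return 0;
}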

The four rings are implemented in essentially the same way. A ring queue is a data structure that wraps an array and consists of five important parts:

  • Producer index (producer)

    The producer index points at the next array slot that can be produced into; if the queue is full, nothing more can be produced.

  • Consumer index (consumer)

    The consumer index points at the next slot that can be consumed; if the queue is empty, nothing more can be consumed.

  • Queue length (len)

    The queue length is simply the length of the underlying array.

  • Queue mask (mask)

    mask = len - 1. The producer and consumer indices cannot be used as array indices directly; they must be combined with the mask. ANDing producer or consumer with mask yields the actual array index.

  • Fixed-length array

    Each array element records the relative address (offset) of a UMEM chunk; if the chunk holds a packet being sent or received, the packet length is recorded as well.

The rings are made lock-free with atomic variables; atomic variables and atomic operations are commonly used in high-performance programming.
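As an illustration of how the producer index, consumer index, mask, and atomic operations fit together, here is a stand-alone single-producer/single-consumer ring in plain C11. It mirrors the structure described above but is not the kernel's implementation; all names are invented for the example.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_LEN 8   /* must be a power of two so mask = len - 1 works */

struct spsc_ring {
    _Atomic uint32_t producer;   /* next slot the producer will write */
    _Atomic uint32_t consumer;   /* next slot the consumer will read  */
    uint32_t mask;               /* RING_LEN - 1                      */
    uint64_t slots[RING_LEN];    /* e.g. UMEM chunk offsets           */
};

static bool ring_produce(struct spsc_ring *r, uint64_t val)
{
    uint32_t prod = atomic_load_explicit(&r->producer, memory_order_relaxed);
    uint32_t cons = atomic_load_explicit(&r->consumer, memory_order_acquire);

    if (prod - cons == RING_LEN)          /* ring full */
        return false;

    r->slots[prod & r->mask] = val;       /* index = sequence & mask */
    atomic_store_explicit(&r->producer, prod + 1, memory_order_release);
    return true;
}

static bool ring_consume(struct spsc_ring *r, uint64_t *val)
{
    uint32_t cons = atomic_load_explicit(&r->consumer, memory_order_relaxed);
    uint32_t prod = atomic_load_explicit(&r->producer, memory_order_acquire);

    if (cons == prod)                     /* ring empty */
        return false;

    *val = r->slots[cons & r->mask];
    atomic_store_explicit(&r->consumer, cons + 1, memory_order_release);
    return true;
}

The indices run freely and wrap around; only the AND with the mask turns them into array positions, which is why the length must be a power of two.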

2.4 Receiving packets with AF_XDP

Receiving packets with AF_XDP requires two rings working together: the FILL RING and the RX RING.

Step 1: the XDP program obtains free UMEM chunks.

The FILL RING records the UMEM chunks that are available for receiving packets. Based on its UMEM usage record, the user program periodically produces free UMEM chunks into the FILL RING.

Step 2: the XDP program fills in newly received packets.

The XDP program consumes UMEM chunks from the FILL RING to store incoming packets. Once a packet has been received, the chunk and the packet length are packed into a descriptor and pushed onto the RX RING, producing a packet waiting to be picked up.

Step 3: the user program receives the packets.

When the user program sees pending packets in the RX RING, it consumes them, copying the packet data from the UMEM chunks into its own buffers; at the same time it must refill the FILL RING so that the XDP program can keep receiving.
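If the xsk helper API from libxdp (or older libbpf) is used instead of hand-rolled ring code, the three steps above map roughly onto the sketch below. It assumes rx, fill, and umem_area were set up beforehand (for example with xsk_umem__create() and xsk_socket__create()), and error handling is omitted.

#include <xdp/xsk.h>   /* or <bpf/xsk.h> with older libbpf */
#include <stdint.h>

#define BATCH 64

/* One pass of the receive loop described above. */
static void rx_once(struct xsk_ring_cons *rx, struct xsk_ring_prod *fill,
                    void *umem_area)
{
    uint32_t idx_rx = 0, idx_fill = 0;

    /* Step 3: consume packets the XDP program produced into the RX ring. */
    unsigned int rcvd = xsk_ring_cons__peek(rx, BATCH, &idx_rx);
    if (!rcvd)
        return;

    /* Step 1 (for the next round): give the same number of UMEM chunks
     * back to the kernel through the FILL ring. */
    xsk_ring_prod__reserve(fill, rcvd, &idx_fill);

    for (unsigned int i = 0; i < rcvd; i++) {
        const struct xdp_desc *desc = xsk_ring_cons__rx_desc(rx, idx_rx + i);
        void *pkt = xsk_umem__get_data(umem_area, desc->addr);

        /* ... process pkt / copy desc->len bytes into a user buffer ... */
        (void)pkt;

        /* Recycle the chunk so the kernel can receive into it again. */
        *xsk_ring_prod__fill_addr(fill, idx_fill + i) = desc->addr;
    }

    xsk_ring_cons__release(rx, rcvd);
    xsk_ring_prod__submit(fill, rcvd);
}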


2.5 Sending packets with AF_XDP

Sending packets with AF_XDP requires two rings working together: the COMPLETION RING (COMP RING) and the TX RING.

Step 1: the user program makes sure it has enough free UMEM chunks to send from.

The COMPLETION RING records packets (UMEM chunks) whose transmission has completed. The user program must reclaim these chunks to ensure there are enough free chunks available for sending.

Step 2: the user program submits a packet to send.

The user program takes a free UMEM chunk, copies the packet into it, and then produces a pending-send descriptor into the TX RING.

Step 3: the XDP side sends the packet.

When the XDP side detects pending packets in the TX RING, it consumes a descriptor from the TX RING and transmits it. Once transmission completes, the UMEM chunk is pushed onto the COMPLETION RING, producing a completed-send entry, and the user program then reclaims that chunk.
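Using the same xsk helper API, the send path can be sketched as follows. The parameters addr (a free UMEM chunk chosen from the usage record), data, and len are supplied by the caller; the batch size of 64 and the function name tx_once are arbitrary choices for the example, and error handling is again omitted.

#include <xdp/xsk.h>   /* or <bpf/xsk.h> with older libbpf */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Send one packet of `len` bytes, staged in the UMEM chunk at `addr`. */
static int tx_once(struct xsk_socket *xsk, struct xsk_ring_prod *tx,
                   struct xsk_ring_cons *comp,
                   void *umem_area, uint64_t addr,
                   const void *data, uint32_t len)
{
    uint32_t idx_tx = 0, idx_comp = 0;

    /* Step 1: reclaim chunks whose transmission has completed. */
    unsigned int done = xsk_ring_cons__peek(comp, 64, &idx_comp);
    if (done) {
        /* ... mark each *xsk_ring_cons__comp_addr(comp, idx_comp + i)
         *     as free again in the UMEM usage record ... */
        xsk_ring_cons__release(comp, done);
    }

    /* Step 2: copy the payload into a UMEM chunk and publish a descriptor. */
    if (xsk_ring_prod__reserve(tx, 1, &idx_tx) != 1)
        return -1;                        /* TX ring full */

    memcpy(xsk_umem__get_data(umem_area, addr), data, len);
    struct xdp_desc *desc = xsk_ring_prod__tx_desc(tx, idx_tx);
    desc->addr = addr;
    desc->len  = len;
    xsk_ring_prod__submit(tx, 1);

    /* Step 3 happens on the kernel side; a sendto() call is commonly
     * used to kick the kernel into draining the TX ring. */
    sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
    return 0;
}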


3. The secret behind AF_XDP's efficiency

AF_XDP owes its efficiency mainly to three things:

  • Kernel bypass

When processing network packets, kernel bypass skips the Linux kernel protocol stack, which is effectively a shortcut and lowers the per-packet overhead on the data path.

  • Memory mapping

The user program and the kernel share the UMEM area and the lock-free ring queues. This memory is mapped with mmap, so the user program can operate on UMEM without system calls, which removes the cost of system-call context switches.

  • Lock-free ring queues

The lock-free ring queues are implemented with atomic variables, which reduces the cost of thread switches and context switches.

Given the points above, AF_XDP is bound to be a high-performance networking technology. I currently do not have a test environment capable of pushing XDP to its limits, so if you are interested in AF_XDP, you can search online for more material.

