
Updated Point Transformer: more efficient, faster and more powerful!

WBOY
2024-01-17 08:27:05

Original title: Point Transformer V3: Simpler, Faster, Stronger

Paper link: https://arxiv.org/pdf/2312.10035.pdf

Code link: https://github.com/Pointcept/PointTransformerV3

Author affiliation: HKU, Shanghai AI Lab, MPI, PKU, MIT

Paper idea:

This paper does not seek innovation within the attention mechanism itself. Instead, it focuses on harnessing the power of scale to overcome the existing trade-off between accuracy and efficiency in point cloud processing. Drawing inspiration from recent advances in 3D large-scale representation learning, the paper observes that model performance is affected more by scale than by intricate design. It therefore proposes Point Transformer V3 (PTv3), which prioritizes simplicity and efficiency over the accuracy of certain mechanisms whose impact on overall performance becomes minor after scaling: for example, replacing KNN's exact neighborhood search with an efficient serialized neighborhood mapping over point clouds organized in specific patterns. This principle enables significant scaling, extending the receptive field from 16 to 1024 points while remaining efficient (3x faster processing and 10x better memory efficiency compared with its predecessor, PTv2). PTv3 achieves state-of-the-art results on more than 20 downstream tasks covering indoor and outdoor scenarios. With further enhancement through multi-dataset joint training, PTv3 pushes these results to an even higher level.

Network Design:

Recent advances in 3D representation learning [85] have overcome the data-scale limitations of point cloud processing by introducing collaborative training across multiple 3D datasets. Combined with this strategy, an efficient convolutional backbone [12] effectively closes the accuracy gap usually associated with point cloud transformers [38, 84]. However, point cloud transformers themselves have not yet fully benefited from this scale advantage because of their efficiency gap relative to sparse convolutions. This observation shaped the original motivation of this work: to re-weigh the design choices of point transformers from the perspective of the scaling principle. This paper argues that model performance is affected more significantly by scale than by intricate design.

Therefore, this article introduces Point Transformer V3 (PTv3), which prioritizes simplicity and efficiency over the accuracy of certain mechanisms in order to achieve scalability; such adjustments have a negligible impact on overall performance after scaling. Specifically, PTv3 makes the following adjustments to achieve superior efficiency and scalability:

  • Inspired by two recent advances [48, 77] and recognizing the scalability advantage of structured over unstructured point clouds, PTv3 abandons the traditional spatial proximity defined by K-Nearest Neighbors (KNN) queries, which account for 28% of the forward time. Instead, it explores the potential of serialized neighborhoods in point clouds organized according to specific patterns (a minimal serialization sketch follows this list).
  • PTv3 adopts a simplified attention mechanism tailored for serialized point clouds, replacing more complex patch interaction mechanisms such as shift-window (which hinders the fusion of attention operators) and neighborhood mechanisms (which lead to high memory consumption).
  • PTv3 eliminates the dependence on relative positional encoding, which accounts for 26% of the forward time, in favor of a simpler sparse convolutional layer placed before attention.
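As a concrete illustration of the serialized-neighborhood idea, below is a minimal sketch that orders a point cloud along a Z-order (Morton) space-filling curve and slices the result into fixed-size patches. The grid size, bit depth, and patch size are illustrative assumptions and do not reflect the official PTv3 implementation.

```python
import numpy as np

def morton_code_3d(grid: np.ndarray, bits: int = 16) -> np.ndarray:
    """Interleave the bits of non-negative integer (x, y, z) grid coordinates
    into a single Z-order (Morton) key per point."""
    grid = grid.astype(np.uint64)
    codes = np.zeros(grid.shape[0], dtype=np.uint64)
    for b in range(bits):
        for axis in range(3):
            bit = (grid[:, axis] >> np.uint64(b)) & np.uint64(1)
            codes |= bit << np.uint64(3 * b + axis)
    return codes

def serialize_points(xyz: np.ndarray, grid_size: float = 0.05) -> np.ndarray:
    """Voxelize coordinates and return the permutation that orders points
    along the Z-order space-filling curve."""
    grid = np.floor((xyz - xyz.min(axis=0)) / grid_size).astype(np.int64)
    return np.argsort(morton_code_3d(grid))

# Toy usage: reorder points, then take contiguous chunks as serialized neighborhoods.
points = np.random.rand(1024, 3).astype(np.float32)
order = serialize_points(points)
patches = points[order].reshape(-1, 128, 3)  # patch size 128 is an illustrative choice
```

Under this kind of ordering, points that are close along the curve are usually close in space, so local attention over contiguous chunks can stand in for an exact (and much more expensive) KNN neighborhood search.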

This article regards these designs as intuitive choices driven by the scaling principle and by advances in existing point cloud transformers. Importantly, it highlights the critical role of understanding how scalability affects backbone design, rather than focusing on detailed module design.

This principle significantly enhances scalability, overcoming the traditional trade-off between accuracy and efficiency (see Figure 1). PTv3 offers 3.3x faster inference and 10.2x lower memory usage than its predecessor. More importantly, PTv3 leverages its inherent ability to scale its sensing range, extending the receptive field from 16 to 1024 points while maintaining efficiency. This scalability underpins its superior performance on real-world perception tasks, with PTv3 achieving state-of-the-art results on more than 20 downstream tasks in indoor and outdoor scenarios. PTv3 further improves these results by increasing its data size through multi-dataset training [85]. It is hoped that the insights of this article will inspire future research in this direction.


Figure 1. Point Transformer V3 (PTv3) overview. Compared with its predecessor PTv2 [84], PTv3 shows superiority in the following aspects: 1. Stronger performance: PTv3 achieves state-of-the-art results on a variety of indoor and outdoor 3D perception tasks. 2. Wider receptive field: benefiting from its simplicity and efficiency, PTv3 expands the receptive field from 16 to 1024 points. 3. Faster speed: PTv3 significantly increases processing speed, making it suitable for latency-sensitive applications. 4. Lower memory consumption: PTv3 reduces memory usage, making it accessible in a broader range of situations.


Figure 2. Latency tree diagram of the components of PTv2. This article benchmarks and visualizes the proportion of forward time spent in each component of PTv2. KNN query and RPE together take up 54% of the forward time.


Figure 3. Point cloud serialization. This article demonstrates four serialization patterns through triplet visualizations. Each triplet shows the space-filling curve used for serialization (left), the sorting order of points along that space-filling curve (middle), and the grouped patches of the serialized point cloud used for local attention (right). Switching among the four serialization patterns allows the attention mechanism to capture a variety of spatial relationships and contexts, improving model accuracy and generalization ability.


Figure 4. Patch grouping. (a) Reordering the point cloud according to the order derived from a specific serialization pattern. (b) Padding the point cloud sequence by borrowing points from neighboring patches so that its length is divisible by the designated patch size.
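To make the padding step in (b) concrete, here is a minimal sketch (a hypothetical helper, not the repository's actual code) that pads a serialized index sequence by borrowing indices from the adjacent patch until its length is divisible by the patch size:

```python
import numpy as np

def pad_to_patch_size(order: np.ndarray, patch_size: int) -> np.ndarray:
    """Pad a serialized index sequence so its length is divisible by patch_size,
    borrowing indices from the preceding (adjacent) patch.
    Assumes len(order) >= patch_size."""
    n = len(order)
    pad = (-n) % patch_size          # number of extra indices needed
    if pad == 0:
        return order
    borrowed = order[n - patch_size : n - patch_size + pad]  # reuse nearby points
    return np.concatenate([order, borrowed])

# Usage with a serialized index sequence (sizes are illustrative):
order = np.arange(1000)              # e.g. 1000 serialized point indices
padded = pad_to_patch_size(order, patch_size=128)
patches = padded.reshape(-1, 128)    # 1024 indices -> 8 full patches
```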


Figure 5. Patch interaction. (a) Standard patch grouping, with a regular, non-shifted arrangement; (b) Shift Dilation, in which points are grouped at regular intervals, producing a dilation effect; (c) Shift Patch, which uses a shifting mechanism similar to the shifted-window approach; (d) Shift Order, in which different serialization patterns are cyclically assigned to successive attention layers; (e) Shuffle Order, in which the sequence of serialization patterns is randomized before being fed to the attention layers.
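The Shift Order and Shuffle Order variants can be sketched in a few lines; the pattern names and helper functions below are illustrative assumptions rather than the repository's API:

```python
import random

# Suppose four serialization patterns are available (see Figure 3).
PATTERNS = ["z", "z-trans", "hilbert", "hilbert-trans"]

def shift_order(num_layers: int) -> list[str]:
    """Shift Order: cycle through the serialization patterns layer by layer."""
    return [PATTERNS[i % len(PATTERNS)] for i in range(num_layers)]

def shuffle_order(num_layers: int, seed: int = 0) -> list[str]:
    """Shuffle Order: randomize the pattern sequence before assigning it to layers."""
    rng = random.Random(seed)
    shuffled = PATTERNS.copy()
    rng.shuffle(shuffled)
    return [shuffled[i % len(shuffled)] for i in range(num_layers)]

# Each attention layer then runs local attention over patches grouped under its
# assigned pattern, so stacked layers see differently shaped neighborhoods.
print(shift_order(6))    # ['z', 'z-trans', 'hilbert', 'hilbert-trans', 'z', 'z-trans']
print(shuffle_order(6))  # the same four patterns, in a randomized cycle
```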


Figure 6. Overall architecture.

Experimental results:

(The experimental results are presented as tables in the original article, covering indoor and outdoor 3D perception benchmarks.)

Summary:

This article introduces Point Transformer V3, which takes a big step toward overcoming the traditional trade-off between accuracy and efficiency in point cloud processing. Guided by a novel interpretation of the scaling principle in backbone design, the paper argues that model performance is affected more profoundly by scale than by intricate design. By prioritizing efficiency over the accuracy of mechanisms with smaller impact, the paper leverages the power of scale and thereby improves performance. In short, by making the model simpler and faster, it becomes stronger.

Citation:

Wu, X., Jiang, L., Wang, P., Liu, Z., Liu, X., Qiao, Y., Ouyang, W., He, T., & Zhao, H. (2023). Point Transformer V3: Simpler, Faster, Stronger. arXiv:2312.10035. https://arxiv.org/abs/2312.10035


Original link: https://mp.weixin.qq.com/s/u_kN8bCHO96x9FfS4HQGiA

