Updated Point Transformer: More Efficient, Faster, and More Powerful!
Original title: Point Transformer V3: Simpler, Faster, Stronger
Paper link: https://arxiv.org/pdf/2312.10035.pdf
Code link: https://github.com/Pointcept/PointTransformerV3
Author affiliations: HKU, Shanghai AI Lab, MPI, PKU, MIT
This paper does not seek innovation within the attention mechanism itself. Instead, it focuses on harnessing the power of scale to overcome the existing trade-off between accuracy and efficiency in point cloud processing. Drawing inspiration from recent advances in 3D large-scale representation learning, the paper observes that model performance is affected more by scale than by design complexity. It therefore proposes Point Transformer V3 (PTv3), which prioritizes simplicity and efficiency over the accuracy of certain mechanisms that matter little to overall performance after scaling: for example, replacing the exact KNN neighborhood search with an efficient serialized neighborhood mapping over point clouds organized in specific patterns (see the sketch below). This principle enables significant scaling, expanding the receptive field from 16 to 1024 points while remaining efficient (3x faster and 10x more memory-efficient than its predecessor, PTv2). PTv3 achieves state-of-the-art results on more than 20 downstream tasks covering indoor and outdoor scenarios, and pushes these results further with multi-dataset joint training.
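The serialized-neighborhood idea is easy to prototype. Below is a minimal sketch (not the authors' implementation) that quantizes points onto a voxel grid, encodes each voxel with a Z-order (Morton) key, and sorts by that key; the function names morton_code and serialize and the 0.05 grid size are illustrative assumptions.

```python
import numpy as np

def morton_code(grid_coords, bits=10):
    """Interleave the bits of non-negative (x, y, z) voxel indices into a
    Z-order (Morton) key, so that sorting by the key orders points along a
    space-filling curve."""
    codes = np.zeros(len(grid_coords), dtype=np.uint64)
    for b in range(bits):
        for axis in range(3):
            bit = (grid_coords[:, axis].astype(np.uint64) >> np.uint64(b)) & np.uint64(1)
            codes |= bit << np.uint64(3 * b + axis)
    return codes

def serialize(points, grid_size=0.05):
    """Quantize points onto a voxel grid and sort them along the Z-order
    curve; fixed-size runs of the sorted sequence then act as attention
    neighborhoods instead of a KNN search."""
    grid = np.floor(points / grid_size).astype(np.int64)
    grid -= grid.min(axis=0)                 # shift to non-negative voxel indices
    order = np.argsort(morton_code(grid))    # serialization order
    return points[order], order
```

Under this sketch, consecutive runs of, say, 1024 indices in the sorted order serve as the local attention neighborhoods, which is what lets the receptive field grow without an exact neighbor search.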
Recent advances in 3D representation learning [85] have overcome the data-scale limitations of point cloud processing by introducing collaborative training across multiple 3D datasets. Combined with this strategy, an efficient convolutional backbone [12] effectively closes the accuracy gap typically associated with point cloud transformers [38, 84]. However, point cloud transformers themselves have not yet fully benefited from this scale advantage, owing to their efficiency gap relative to sparse convolutions. This observation shaped the original motivation for this work: to re-weigh the design choices of point transformers from the perspective of the scaling principle. The paper argues that model performance is affected more significantly by scale than by intricate design.
Therefore, this article introduces Point Transformer V3 (PTv3), which prioritizes simplicity and efficiency over the accuracy of certain mechanisms in order to achieve scalability; such adjustments have a negligible impact on overall performance after scaling. Specifically, PTv3 makes several adjustments to achieve superior efficiency and scalability.
The paper regards these designs as intuitive choices driven by the scaling principle and by advances in existing point cloud transformers. Importantly, it emphasizes understanding how scalability should shape backbone design, rather than focusing on detailed module design.
This principle significantly enhances scalability, overcoming the traditional trade-off between accuracy and efficiency (see Figure 1). PTv3 offers 3.3x faster inference and 10.2x lower memory usage than its predecessor. More importantly, PTv3 leverages its inherent ability to scale its sensing range, expanding the receptive field from 16 to 1024 points while maintaining efficiency. This scalability underpins its superior performance on real-world perception tasks: PTv3 achieves state-of-the-art results on more than 20 downstream tasks in indoor and outdoor scenarios, and improves these results further by increasing the data scale through multi-dataset training [85]. The authors hope these insights will inspire future research in this direction.
Figure 1. Point Transformer V3 (PTv3) overview. Compared with its predecessor PTv2 [84], PTv3 shows advantages in the following aspects: 1. Stronger performance. PTv3 achieves state-of-the-art results on a variety of indoor and outdoor 3D perception tasks. 2. Wider receptive field. Benefiting from its simplicity and efficiency, PTv3 expands the receptive field from 16 to 1024 points. 3. Faster speed. PTv3 significantly increases processing speed, making it suitable for latency-sensitive applications. 4. Lower memory consumption. PTv3 reduces memory usage, making it accessible in a wider range of situations.
Figure 2. Latency treemap of the components of PTv2. The paper benchmarks and visualizes the share of forward time spent in each component of PTv2; the KNN query and RPE together take up 54% of the forward time.
Figure 3. Point cloud serialization. The paper illustrates four serialization patterns with triplet visualizations. Each triplet shows the space-filling curve used for serialization (left), the sorting order of the point cloud within that space-filling curve (middle), and the grouped patches of the serialized point cloud used for local attention (right). Switching among the four serialization patterns allows the attention mechanism to capture diverse spatial relationships and contexts, improving model accuracy and generalization.
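As a rough illustration of how multiple serialization patterns can be obtained from one encoder, the hedged sketch below derives a "transposed" variant by permuting coordinate axes before computing the curve key. It reuses the morton_code helper from the earlier sketch; the pattern names and the AXIS_PERMUTATIONS mapping are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Illustrative axis permutations: the "trans" variant swaps the leading axes
# before bit interleaving, yielding a different traversal of the same grid.
AXIS_PERMUTATIONS = {
    "z":       (0, 1, 2),
    "z-trans": (1, 0, 2),
}

def serialization_orders(grid_coords, patterns=("z", "z-trans")):
    """Return one sorting order per serialization pattern (reuses morton_code)."""
    orders = {}
    for name in patterns:
        perm = list(AXIS_PERMUTATIONS[name])
        orders[name] = np.argsort(morton_code(grid_coords[:, perm]))
    return orders
```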
Figure 4. Patch grouping. (a) Reordering of the point cloud according to the order derived from a specific serialization pattern. (b) Padding the point cloud sequence by borrowing points from adjacent patches so that its length is divisible by the specified patch size (see the sketch below).
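A hedged sketch of this padding-and-grouping step, assuming the serialized order comes from something like the serialize sketch above and that the cloud contains at least one full patch; group_into_patches and the 1024 patch size are illustrative names rather than the repository's API.

```python
import numpy as np

def group_into_patches(order, patch_size=1024):
    """Pad the serialized index sequence so its length is divisible by
    patch_size, borrowing points from the preceding patch, then reshape
    into (num_patches, patch_size) groups for local attention."""
    n = len(order)
    rem = n % patch_size
    if rem:
        pad = patch_size - rem
        start = n - rem                          # first index of the incomplete tail patch
        borrowed = order[start - pad:start]      # last `pad` points of the preceding patch
        order = np.concatenate([order, borrowed])
    return order.reshape(-1, patch_size)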
Figure 5. Patch interaction. (a) Standard patch grouping, with a regular, non-shifted arrangement; (b) Shift Dilation, in which points are grouped at regular intervals to produce a dilation effect; (c) Shift Patch, which uses a shifting mechanism similar to the shifted-window approach; (d) Shift Order, in which different serialization patterns are cyclically assigned to successive attention layers; (e) Shuffle Order, in which the sequence of serialization patterns is randomized before being fed to the attention layers.
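The Shift Order and Shuffle Order variants amount to scheduling which serialization pattern each attention block sees. A minimal sketch, assuming pattern names like those in Figure 3; assign_patterns is a hypothetical helper, not part of the released code.

```python
import itertools
import random

def assign_patterns(num_layers,
                    patterns=("z", "z-trans", "hilbert", "hilbert-trans"),
                    shuffle=False):
    """Assign one serialization pattern to each attention layer.

    shuffle=False -> Shift Order: cycle through the patterns in a fixed order.
    shuffle=True  -> Shuffle Order: randomize the pattern sequence first.
    """
    pool = list(patterns)
    if shuffle:
        random.shuffle(pool)                 # randomize before assignment
    cycle = itertools.cycle(pool)            # cycle patterns across layers
    return [next(cycle) for _ in range(num_layers)]
```

For example, assign_patterns(8) yields a fixed cyclic schedule over the four patterns, while assign_patterns(8, shuffle=True) varies the schedule between runs.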
Figure 6. Overall architecture.
arXiv: abs/2312.10035
Original link: https://mp.weixin.qq.com/s/u_kN8bCHO96x9FfS4HQGiA