Align then Adapt: Rethinking Parameter-Efficient Transfer Learning in 4D Perception
Point cloud video understanding is critical for robotics because it accurately encodes motion and scene interaction. We observe that 4D datasets are far scarcer than 3D ones, which hampers the scalability of self-supervised 4D models. A promising alternative is to transfer 3D pre-trained models to 4D perception tasks. However, rigorous empirical analysis reveals two critical limitations that impede transfer capability: overfitting and the modality gap. To overcome these challenges, we develop a novel "Align then Adapt" (PointATA) paradigm that decomposes parameter-efficient transfer learning into two sequential stages. In Stage 1, optimal-transport theory is employed to quantify the distributional discrepancy between 3D and 4D datasets, and our proposed point align embedder is trained to alleviate the underlying modality gap. In Stage 2, to mitigate overfitting, an efficient point-video adapter and a spatial-context encoder are integrated into the frozen 3D backbone to strengthen temporal modeling capacity. Notably, with these designs, PointATA enables a pre-trained 3D model without temporal knowledge to reason about dynamic video content at a smaller parameter cost than previous work. Extensive experiments show that PointATA can match or even outperform strong full fine-tuning models while retaining parameter efficiency, e.g., 97.21% accuracy on 3D action recognition, +8.7% on 4D action segmentation, and 84.06% on 4D semantic segmentation.
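The abstract does not say how the optimal-transport discrepancy between 3D and 4D data is computed. As a rough, hypothetical illustration only (not the authors' implementation), an entropy-regularized Sinkhorn distance between two sets of pooled features could be sketched as follows; `sinkhorn_distance`, `X`, and `Y` are names assumed here for the example:

```python
import numpy as np

def sinkhorn_distance(X, Y, reg=0.05, n_iters=200):
    """Entropy-regularized OT cost between two empirical feature sets.

    X: (n, d) array, e.g. pooled features from a 3D dataset
    Y: (m, d) array, e.g. pooled features from a 4D dataset
    Returns the transport cost <P, C> under the Sinkhorn plan P.
    """
    n, m = len(X), len(Y)
    # Pairwise squared-Euclidean cost, normalized for numerical stability.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    C = C / C.max()
    K = np.exp(-C / reg)                              # Gibbs kernel
    a = np.full(n, 1.0 / n)                           # uniform source marginal
    b = np.full(m, 1.0 / m)                           # uniform target marginal
    u = np.ones(n)
    for _ in range(n_iters):                          # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                   # transport plan
    return (P * C).sum()

# Toy check: a shifted target distribution should incur a larger cost
# than comparing a distribution with itself.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))
d_same = sinkhorn_distance(X, X)
d_shift = sinkhorn_distance(X, X + 2.0)
print(d_same < d_shift)  # shifted target incurs larger transport cost
```

In practice a library such as POT (`ot.sinkhorn2`) or a log-domain solver would be preferable for stability; this plain fixed-point form is only meant to show what "quantifying the distributional discrepancy" via optimal transport involves.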
Forward citations
Cited by 5 Pith papers
- Diffusion Masked Pretraining for Dynamic Point Cloud
  DiMP applies diffusion modeling to masked pretraining of dynamic point clouds to remove positional leakage and capture motion uncertainty, yielding 11.21% and 13.65% gains on offline and online action segmentation.
- Diffusion Masked Pretraining for Dynamic Point Cloud
  DiMP uses diffusion to infer clean masked positions from visible context and to model full distributions of point displacements rather than means, delivering 11.21% and 13.65% absolute gains on offline and online acti...
- Mantis: Mamba-native Tuning is Efficient for 3D Point Cloud Foundation Models
  Mantis is the first Mamba-native PEFT framework for 3D point cloud models that injects task signals into state-space updates via State-Aware Adapters and regularizes serialization with Dual-Serialization Consistency D...
- Mantis: Mamba-native Tuning is Efficient for 3D Point Cloud Foundation Models
  Mantis is the first Mamba-native PEFT framework for 3D point cloud models, using state-aware adapters and dual-serialization distillation to match performance with only 5% trainable parameters.
- CFMS: A Coarse-to-Fine Multimodal Synthesis Framework for Enhanced Tabular Reasoning
  CFMS is a coarse-to-fine framework that uses MLLMs to create a multi-perspective knowledge tuple as a reasoning map for symbolic table operations, yielding competitive accuracy on WikiTQ and TabFact.