Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models

Authors: Yiwen Tang, Ray Zhang, Zoey Guo, Xianzheng Ma, Bin Zhao, Zhigang Wang, Dong Wang, Xuelong Li

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments indicate that our Point-PEFT can achieve better performance than the full fine-tuning on various downstream tasks, while using only 5% of the trainable parameters, demonstrating the efficiency and effectiveness of our approach. (A sketch of how such a trainable-parameter fraction can be computed follows the table.)
Researcher Affiliation | Collaboration | Yiwen Tang1,2*, Ray Zhang2*, Zoey Guo2*, Xianzheng Ma2, Bin Zhao1,2, Zhigang Wang2, Dong Wang2, Xuelong Li1,2; 1 Northwestern Polytechnical University, 2 Shanghai AI Laboratory
Pseudocode | No | The paper describes its methods in prose and diagrams but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is released at https://github.com/Ivan-Tang-3D/Point-PEFT.
Open Datasets | Yes | The ScanObjectNN (Uy et al. 2019) dataset is a real-world 3D point cloud classification dataset, containing about 15,000 3D objects from 15 distinct categories. ... The ModelNet40 dataset (Wu et al. 2015) comprises a total of 12,311 3D CAD models across 40 categories.
Dataset Splits | Yes | The initial learning rate is set as 0.0005, with a weight decay factor of 0.05. We fine-tune the models in 300 epochs, utilizing a batch size of 32. As shown in Table 1, indicates that the fine-tuning utilizes a stronger data augmentation in I2P-MAE (Zhang et al. 2023c), including random scaling, translation, and rotation. Otherwise, we only adopt random scaling and translation. Respectively for Point-BERT, Point-MAE, and Point-M2AE, we set the prompting layers and prompt length (L, K) as (6, 5), (6, 10), and (15, 16). ... We focus on the hardest PB-T50-RS split...
Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper mentions using the 'AdamW optimizer' but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used.
Experiment Setup | Yes | The initial learning rate is set as 0.0005, with a weight decay factor of 0.05. We fine-tune the models in 300 epochs, utilizing a batch size of 32. ... Respectively for Point-BERT, Point-MAE, and Point-M2AE, we set the prompting layers and prompt length (L, K) as (6, 5), (6, 10), and (15, 16). (Sketches of the prompt configuration and training recipe follow the table.)
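
The 5% figure in the Research Type row refers to the fraction of parameters that remain trainable once the pre-trained backbone is frozen. Below is a minimal PyTorch sketch of that bookkeeping; the keyword-based selection of tunable modules (`prompt`, `adapter`, `head`) and the checkpoint name are illustrative assumptions, not the authors' released code.

```python
import torch.nn as nn

# Minimal sketch: freeze a pre-trained backbone and report the fraction of
# parameters left trainable. Module-name keywords are illustrative only.

def make_peft(model: nn.Module, tunable_keywords=("prompt", "adapter", "head")) -> nn.Module:
    """Freeze every parameter except those whose name matches a tunable keyword."""
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in tunable_keywords)
    return model

def trainable_ratio(model: nn.Module) -> float:
    """Fraction of parameters that will receive gradients during fine-tuning."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total

# Hypothetical usage with a pre-trained 3D backbone:
# backbone = make_peft(load_pretrained("point-mae.ckpt"))
# print(f"trainable fraction: {trainable_ratio(backbone):.1%}")  # ~5% per the paper
```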
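
The (L, K) settings in the Experiment Setup row specify how many of the final transformer blocks receive learnable prompts (L) and how many prompt tokens each block gets (K), e.g. (6, 10) for Point-MAE. The sketch below models this in the style of deep visual prompt tuning; replacing the previous layer's prompt tokens at each prompted block and the truncated-normal initialization are assumptions, not the released Point-PEFT module.

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Prepend K learnable prompt tokens at each of the last L transformer blocks."""

    def __init__(self, blocks: nn.ModuleList, dim: int, prompt_layers: int = 6, prompt_len: int = 10):
        super().__init__()
        self.blocks = blocks
        self.start = len(blocks) - prompt_layers          # first block that receives prompts
        self.prompt_len = prompt_len
        self.prompts = nn.Parameter(torch.zeros(prompt_layers, prompt_len, dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # tokens: (B, N, dim)
        for i, block in enumerate(self.blocks):
            if i >= self.start:
                if i > self.start:
                    # Drop the previous layer's prompt tokens before inserting new ones.
                    tokens = tokens[:, self.prompt_len:]
                prompt = self.prompts[i - self.start].expand(tokens.shape[0], -1, -1)
                tokens = torch.cat([prompt, tokens], dim=1)
            tokens = block(tokens)
        return tokens
```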
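
The optimizer, schedule length, batch size, and default augmentation quoted in the Dataset Splits and Experiment Setup rows can be summarized as a short training loop. This is a hypothetical sketch: the augmentation ranges, dataloader, model, and loss are placeholders, while learning rate 0.0005, weight decay 0.05, 300 epochs, batch size 32, and AdamW follow the paper.

```python
import torch

def augment(points: torch.Tensor, scale_range=(0.8, 1.2), shift_range=0.1) -> torch.Tensor:
    """Random scaling and translation of a (B, N, 3) point cloud (ranges are assumed)."""
    B = points.shape[0]
    scale = torch.empty(B, 1, 3, device=points.device).uniform_(*scale_range)
    shift = torch.empty(B, 1, 3, device=points.device).uniform_(-shift_range, shift_range)
    return points * scale + shift

def fine_tune(model, train_loader, epochs=300, lr=5e-4, weight_decay=0.05):
    """Fine-tune only the parameters left trainable, with the reported AdamW recipe."""
    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad),
        lr=lr, weight_decay=weight_decay,
    )
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for points, labels in train_loader:   # batch size 32 per the paper
            logits = model(augment(points))
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```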