PCoTTA: Continual Test-Time Adaptation for Multi-Task Point Cloud Understanding
Authors: Jincen Jiang, Qianyu Zhou, Yuhang Li, Xinkui Zhao, Meili Wang, Lizhuang Ma, Jian Chang, Jian Jun Zhang, Xuequan Lu
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental comparisons lead to a new benchmark, demonstrating PCoTTA's superiority in boosting the model's transferability towards the continually changing target domain. |
| Researcher Affiliation | Academia | Jincen Jiang (Bournemouth University, jiangj@bournemouth.ac.uk); Qianyu Zhou (Shanghai Jiao Tong University, zhouqianyu@sjtu.edu.cn); Yuhang Li (Shanghai University, yuhangli@shu.edu.cn); Xinkui Zhao (Zhejiang University, zhaoxinkui@zju.edu.cn); Meili Wang (Northwest A&F University, wml@nwsuaf.edu.cn); Lizhuang Ma (Shanghai Jiao Tong University, lzma@sjtu.edu.cn); Jian Chang (Bournemouth University, jchang@bournemouth.ac.uk); Jian Jun Zhang (Bournemouth University, jzhang@bournemouth.ac.uk); Xuequan Lu (La Trobe University, b.lu@latrobe.edu.au) |
| Pseudocode | No | The paper describes its modules and processes in textual form, but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks/figures. |
| Open Source Code | Yes | Our source code is available at: https://github.com/Jinec98/PCoTTA. |
| Open Datasets | Yes | We meticulously curate and select data from 4 distinct datasets (2 synthetic and 2 real-world datasets), containing 7 identical object categories. Subsequently, we generate corresponding ground truth based on 3 different tasks. The synthetic datasets include ModelNet40 [51] and ShapeNet [5]. We also consider real-world data: ScanNet [7] and ScanObjectNN [42]. |
| Dataset Splits | Yes | ModelNet40 consists of 3,713 samples for training and 686 for testing, while ShapeNet comprises 15,001 training samples and 2,145 testing samples. We also consider real-world data: ScanNet [7] and ScanObjectNN [42]. ScanNet provides annotations for individual objects in real 3D scans, and we choose 5,763 samples for training and 1,677 for testing. ScanObjectNN includes 1,577 training samples and 392 testing samples. |
| Hardware Specification | Yes | We implement our method using PyTorch and perform experiments on two NVIDIA A40 GPUs. |
| Software Dependencies | No | The paper states 'We implement our method using PyTorch' but does not specify a version number for PyTorch or any other key software libraries. |
| Experiment Setup | Yes | Following PIC [11], we set the training batch size to 128 and utilize the AdamW optimizer [28]. The learning rate is set to 0.001, with a cosine learning scheduler and a weight decay of 0.05. All models are trained for 300 epochs during the pretraining stage, and we train the pre-trained model for 3 epochs on the source domains to initialize our prototype bank. Each point cloud is sampled to 1,024 points and then split into 64 patches, with each patch consisting of 32 points. Within the MPM framework, the mask ratio is set to 0.7, consistent with prior studies [57, 33]. (A minimal configuration sketch is given after this table.) |
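The training configuration in the Experiment Setup row maps directly onto standard PyTorch components. Below is a minimal sketch, assuming a placeholder model and per-epoch scheduler stepping (the paper does not state the stepping granularity); the hyperparameter values come from the reported setup, while the model, loop body, and variable names are illustrative only, not the authors' implementation.

```python
# Minimal sketch of the reported training configuration (not the authors' code).
# Hyperparameter values are taken from the paper's setup; the model and the
# training loop body are hypothetical placeholders.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

BATCH_SIZE = 128      # training batch size, following PIC
EPOCHS = 300          # pretraining epochs
NUM_POINTS = 1024     # points sampled per point cloud
NUM_PATCHES = 64      # patches per cloud
PATCH_SIZE = 32       # points per patch
MASK_RATIO = 0.7      # mask ratio used within the MPM framework

# Placeholder standing in for the actual multi-task point cloud model.
model = torch.nn.Linear(3, 3)

optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS)  # cosine learning-rate schedule

for epoch in range(EPOCHS):
    # ... iterate over the training set in batches of BATCH_SIZE,
    #     compute the multi-task losses, and call loss.backward() ...
    optimizer.step()   # placeholder update; real code steps once per batch
    scheduler.step()   # assumed per-epoch learning-rate decay
```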