PointTFA: Training-Free Clustering Adaption for Large 3D Point Cloud Models

Authors: Jinmeng Wu, Chong Cao, Hao Zhang, Basura Fernando, Yanbin Hao, Hanyu Hong

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 Experiments: We evaluate our plug-and-play PointTFA on five major large 3D point cloud models: PointNet2 (SSG) [Qi et al., 2017], PointMLP [Ma et al., 2022], PointBERT [Yu et al., 2022], PointNeXt [Qian et al., 2022], and PointBERT-ULIP-2 [Xue et al., 2023b]. We assess performance in a training-free few-shot scenario across three downstream datasets. Details of the datasets are given below.
Researcher Affiliation | Academia | ¹School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan, China; ²University of Science and Technology of China; ³Center for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR)
Pseudocode | Yes | Algorithm 1: Data-Efficient RMC (a hedged sketch of the cache-building step follows this table)
Open Source Code | Yes | The code is available at: https://github.com/CaoChong-git/PointTFA
Open Datasets | Yes | Datasets. We tested on the ModelNet10 & 40 [Wu et al., 2015] and ScanObjectNN [Uy et al., 2019] datasets.
Dataset Splits | No | The paper specifies "training samples" and "test samples" for ModelNet10 and ModelNet40, but does not explicitly mention a distinct validation split or its size.
Hardware Specification | No | The paper does not specify any hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | Yes | The constant α determines how much information from the support cache influences the final query-cloud predictions. Specifically, a large α is appropriate when there is a significant distribution shift between downstream and upstream data; conversely, a small α retains more information from the upstream data. We set α to around 20 and β to around 10. (A hedged sketch of this α/β fusion follows this table.)
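The report confirms that Algorithm 1 ("Data-Efficient RMC") is given as pseudocode but does not reproduce it. Below is a minimal sketch of one plausible reading, assuming RMC builds a representative memory cache by k-means clustering the few-shot support features per class and keeping the centroids as cache keys. The function name `build_rmc_cache`, the parameter `clusters_per_class`, and the use of scikit-learn's KMeans are our illustrative assumptions, not the authors' verified implementation.

```python
# Hedged sketch of an RMC-style representative cache builder.
# Assumption (ours): representatives per class are k-means centroids
# of that class's support features, L2-normalized for cosine matching.
import numpy as np
from sklearn.cluster import KMeans

def build_rmc_cache(support_feats: np.ndarray, support_labels: np.ndarray,
                    clusters_per_class: int = 4):
    """Return (cache_keys, cache_labels) summarizing the support set."""
    cache_keys, cache_labels = [], []
    for cls in np.unique(support_labels):
        feats = support_feats[support_labels == cls]       # (n_cls, d)
        k = min(clusters_per_class, len(feats))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
        centroids = km.cluster_centers_                    # (k, d)
        # Normalize so a dot product later equals cosine similarity.
        centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
        cache_keys.append(centroids)
        cache_labels.extend([cls] * k)
    return np.vstack(cache_keys), np.array(cache_labels)
```

Keeping a few centroids per class instead of every support sample is what would make such a cache "data-efficient": the cache size stays fixed as the number of shots grows.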
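The Experiment Setup row quotes the roles of α and β but the report does not include the paper's fusion formula. The sketch below follows the Tip-Adapter-style cache fusion that training-free adapters of this kind commonly use, where β sharpens cache affinities and α weights the cache term against the backbone's zero-shot logits; the exact form, the function name `fuse_predictions`, and the `cache_onehot` label matrix are our assumptions.

```python
# Hedged sketch of cache/zero-shot logit fusion (Tip-Adapter-style form
# assumed by us; the paper's exact formula may differ).
import numpy as np

def fuse_predictions(query_feat: np.ndarray, cache_keys: np.ndarray,
                     cache_onehot: np.ndarray, zs_logits: np.ndarray,
                     alpha: float = 20.0, beta: float = 10.0) -> np.ndarray:
    """Blend cache evidence into zero-shot logits for one query cloud."""
    q = query_feat / np.linalg.norm(query_feat)
    affinity = cache_keys @ q                       # (num_cache,) cosine scores
    weights = np.exp(-beta * (1.0 - affinity))      # beta sharpens affinities
    cache_logits = weights @ cache_onehot           # (num_classes,)
    return zs_logits + alpha * cache_logits         # alpha scales cache term
```

Under this reading, α near 0 recovers the pure zero-shot prediction, while a large α (the paper uses around 20) lets the support cache dominate, matching the quoted guidance that a large α suits a strong downstream/upstream distribution shift.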