PointPatchMix: Point Cloud Mixing with Patch Scoring

Authors: Yi Wang, Jiaze Wang, Jinpeng Li, Zixu Zhao, Guangyong Chen, Anfeng Liu, Pheng Ann Heng

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate PointPatchMix on two benchmark datasets, ModelNet40 and ScanObjectNN, and demonstrate significant improvements over various baselines on both synthetic and real-world datasets, as well as in few-shot settings." "Ablation Studies. We conduct experiments from four perspectives: mixing level, target generation, patch assignment, and the influence of β, and report the classification accuracy (%) of the Transformer on ModelNet40."
Researcher Affiliation | Academia | 1) Central South University, Hunan, China; 2) The Chinese University of Hong Kong, Hong Kong SAR, China; 3) Zhejiang Lab, Zhejiang, China. Contact: csu-wy@csu.edu.cn, jzwang@link.cuhk.edu.hk, afengliu@mail.csu.edu.cn
Pseudocode | No | The paper describes the proposed algorithm and its components (e.g., patch scoring, the mixing procedure) but does not include a formally labeled 'Pseudocode' or 'Algorithm' block; a hedged sketch of patch-level mixing is given after this table.
Open Source Code | Yes | "Code is available at https://jiazewang.com/projects/pointpatchmix.html."
Open Datasets | Yes | "We conduct extensive experiments on both synthetic and real-world datasets in point cloud shape classification to evaluate the effectiveness of PointPatchMix, i.e., ModelNet40 and ScanObjectNN."
Dataset Splits | Yes | "ModelNet40. It is a widely used, clean point cloud object dataset for classification tasks, comprising 12,311 samples spanning 40 object categories. We follow the standard split, using 9,843 samples for training and 2,468 samples for testing."
Hardware Specification | No | The paper describes training configurations such as epochs, batch size, optimizers, and learning rates, but does not specify the hardware used, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions optimizers (Adam, AdamW) and a cosine annealing strategy, but provides no version numbers for these or for any other software dependencies, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | "In the experimental configuration, all networks are trained on 1,024 points for 300 epochs with a batch size of 32. For PointNet and PointNet++, we use the Adam optimizer with an initial learning rate of 0.001 and a decay rate of 0.5 every 20 epochs. For Point-MAE, we use the AdamW optimizer with an initial learning rate of 0.001 and a weight decay of 0.05. A cosine annealing strategy is used to attenuate the learning rate." A PyTorch rendering of this configuration follows the sketch below.
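
Because the paper provides no formal algorithm block, the following is a minimal sketch of what patch-level mixing with patch scoring could look like. The patch construction (farthest point sampling plus kNN grouping), the Beta-sampled mixing ratio, the score-ranked patch selection, and the score-mass target rule are all our assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical sketch of patch-level point cloud mixing with patch scores.
# FPS + kNN grouping and the score-weighted target are our assumptions;
# they are not taken verbatim from the paper.
import numpy as np

def fps(points, n_centers):
    """Farthest point sampling: pick n_centers well-spread center indices."""
    n = points.shape[0]
    centers = [np.random.randint(n)]
    dist = np.full(n, np.inf)
    for _ in range(n_centers - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[centers[-1]], axis=1))
        centers.append(int(dist.argmax()))
    return np.array(centers)

def make_patches(points, n_patches=64, k=32):
    """Group a point cloud (N, 3) into (n_patches, k, 3) via FPS + kNN."""
    centers = points[fps(points, n_patches)]
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest points per center
    return points[idx]

def point_patch_mix(pc_a, pc_b, score_a, score_b, label_a, label_b,
                    n_classes, beta=1.0):
    """Swap a fraction of A's patches for B's; mix labels by score mass.

    score_a / score_b: one nonnegative score per patch (n_patches,).
    """
    pa, pb = make_patches(pc_a), make_patches(pc_b)
    lam = np.random.beta(beta, beta)             # mixing ratio, as in mixup
    n_keep = int(round(lam * pa.shape[0]))       # patches kept from A
    keep_a = np.argsort(score_a)[::-1][:n_keep]  # highest-scoring A patches
    take_b = np.argsort(score_b)[::-1][:pa.shape[0] - n_keep]
    mixed = np.concatenate([pa[keep_a], pb[take_b]], axis=0)
    # Target is proportional to the score mass contributed by each source.
    w_a, w_b = score_a[keep_a].sum(), score_b[take_b].sum()
    target = np.zeros(n_classes)
    target[label_a] += w_a / (w_a + w_b)
    target[label_b] += w_b / (w_a + w_b)
    return mixed.reshape(-1, 3), target
```

The Beta(β, β) draw mirrors standard mixup practice and matches the ablation on the influence of β quoted in the Research Type row.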
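
For the Experiment Setup row, here is one plausible PyTorch mapping of the quoted optimizer settings; the placeholder models and the use of StepLR to realize the 0.5-per-20-epochs decay are our assumptions.

```python
# Hypothetical PyTorch mapping of the quoted training configuration.
# `pointnet` and `point_mae` are stand-ins for the real model definitions.
import torch
import torch.nn as nn

pointnet = nn.Linear(3, 40)   # placeholder for PointNet / PointNet++
point_mae = nn.Linear(3, 40)  # placeholder for Point-MAE

# PointNet / PointNet++: Adam, lr 0.001, decayed by 0.5 every 20 epochs.
opt_pn = torch.optim.Adam(pointnet.parameters(), lr=1e-3)
sched_pn = torch.optim.lr_scheduler.StepLR(opt_pn, step_size=20, gamma=0.5)

# Point-MAE: AdamW, lr 0.001, weight decay 0.05, cosine annealing over
# the full 300-epoch schedule (batch size 32, 1,024 input points).
opt_mae = torch.optim.AdamW(point_mae.parameters(), lr=1e-3, weight_decay=0.05)
sched_mae = torch.optim.lr_scheduler.CosineAnnealingLR(opt_mae, T_max=300)

for epoch in range(300):
    # ... one training epoch over 1,024-point clouds, batch size 32 ...
    sched_pn.step()
    sched_mae.step()
```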