TPC: Transformation-Specific Smoothing for Point Cloud Models

Authors: Wenda Chu, Linyi Li, Bo Li

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on several common 3D transformations show that TPC significantly outperforms the state of the art. For example, our framework boosts the certified accuracy against twisting transformation along the z-axis (within 20°) from 20.3% to 83.8%. Codes and models are available at https://github.com/Qianhewu/Point-Cloud-Smoothing." "We conduct extensive experiments to evaluate TPC."
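The z-axis twist cited in the row above has a standard parametrization: each point is rotated in the xy-plane by an angle proportional to its z-coordinate. The sketch below is an illustrative NumPy version, not the authors' code; the function name and the assumption that the cloud is normalized so the maximum twist occurs at z = 1 are mine.

```python
import numpy as np

def z_twist(points: np.ndarray, max_angle_deg: float = 20.0) -> np.ndarray:
    """Twist an (N, 3) point cloud about the z-axis.

    Each point is rotated in the xy-plane by an angle proportional to its
    z-coordinate, so a point at z = 1 is rotated by `max_angle_deg` and a
    point at z = 0 is unchanged. (Illustrative sketch; the paper's exact
    parametrization may differ.)
    """
    theta = np.deg2rad(max_angle_deg) * points[:, 2]  # per-point angle
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Apply a 2D rotation by theta(z) to each (x, y), leaving z untouched.
    return np.stack([cos_t * x - sin_t * y,
                     sin_t * x + cos_t * y,
                     z], axis=1)
```

For example, `z_twist(points, 20.0)` produces a cloud twisted by up to 20° at the top, matching the "within 20°" attack radius quoted in the evidence above.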
Researcher Affiliation | Academia | Wenda Chu (Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, P. R. China; work done during a remote internship at UIUC); Linyi Li, Bo Li (University of Illinois Urbana-Champaign (UIUC), Illinois, USA).
Pseudocode | No | The paper contains mathematical derivations, theorems, and definitions, but no pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "Codes and models are available at https://github.com/Qianhewu/Point-Cloud-Smoothing."
Open Datasets | Yes | "Dataset. We perform experiments on the ModelNet40 dataset (Wu et al., 2015), which includes different 3D objects of 40 categories." The paper also evaluates part segmentation on the ShapeNet dataset (Chang et al., 2015).
Dataset Splits | No | The paper mentions evaluating on a "fixed random subset of the ModelNet40 test dataset" with a "standard preprocessing pipeline", but it specifies neither percentages nor counts for the training, validation, and test splits, nor does it cite a standard split that includes a validation set.
Hardware Specification | No | The paper describes the experimental setup but does not specify hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python version or library versions) that would be needed for reproducibility.
Experiment Setup | No | The paper states "We apply data augmentation training for each transformation combined with consistency regularization to train base classifiers" and "We train a PointNet model with 64 points", but it provides no hyperparameters such as learning rate, batch size, optimizer, or number of epochs, which are crucial for reproducing the experimental setup.
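The row above quotes the paper's training recipe (data augmentation plus consistency regularization) without its exact form. One common instantiation of a consistency penalty is a symmetric KL divergence between the classifier's predictions on two augmented copies of the same input; the sketch below assumes that form, and the function names and logits-based interface are illustrative rather than taken from the paper.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a: np.ndarray, logits_b: np.ndarray,
                     eps: float = 1e-12) -> float:
    """Symmetric KL divergence between predictions on two augmented
    copies of the same point cloud.

    This is one common form of consistency regularization; the paper
    does not specify which variant it uses. `logits_a` and `logits_b`
    are (batch, num_classes) arrays of pre-softmax scores.
    """
    p, q = softmax(logits_a), softmax(logits_b)
    kl_pq = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    kl_qp = np.sum(q * (np.log(q + eps) - np.log(p + eps)), axis=-1)
    return float(0.5 * (kl_pq + kl_qp).mean())
```

In a full pipeline this term would be added, with some weight, to the ordinary cross-entropy loss on the augmented samples; the loss is zero when the two predictions agree and grows as they diverge.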