Interpreting Representation Quality of DNNs for 3D Point Cloud Processing

Authors: Wen Shen, Qihan Ren, Dongrui Liu, Quanshi Zhang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We propose a method to disentangle the overall model vulnerability into the sensitivity to the rotation, the translation, the scale, and local 3D structures. Besides, we also propose metrics to evaluate the spatial smoothness of encoding 3D structures, and the representation complexity of the DNN. Based on such analysis, experiments expose representation problems with classic DNNs, and explain the utility of the adversarial training. The code will be released when this paper is accepted.
Researcher Affiliation | Academia | Wen Shen (b), Qihan Ren (a), Dongrui Liu (a), Quanshi Zhang (a); affiliations: (a) Shanghai Jiao Tong University, (b) Tongji University
Pseudocode | No | The paper describes computational steps and methods but does not include a formally labeled "Pseudocode" or "Algorithm" block.
Open Source Code | No | The code will be released when this paper is accepted.
Open Datasets | Yes | All DNNs were learned based on the ModelNet10 dataset [46] and the ShapeNet Part dataset [49].
Dataset Splits | No | The paper mentions using the ModelNet10 and ShapeNet Part datasets and reports "testing accuracy," but it does not explicitly provide the specific percentages or counts for the training, validation, and test splits within the paper's text.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | We followed [26] to only use 1024 points of each point cloud to train all DNNs. Each point cloud was partitioned to n = 32 regions for the computation of all metrics. In real implementation, we set η = 0.001, γ = 0.003, and d = 0.03. The objective function was $\min_w \mathbb{E}_x \max_T \mathrm{Loss}(x' = T(x), y^{\mathrm{truth}}; w)$, where $w$ denoted the parameters of the GCNN.
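
The "Research Type" row quotes the paper's proposal to disentangle model vulnerability into sensitivity to rotation, translation, and scale. Below is a minimal sketch of that idea, not the paper's exact metrics: it probes a point-cloud classifier by applying each transformation in isolation and measuring the change in its output. The `model` interface, the sampling distributions, and the reuse of the quoted γ = 0.003 and d = 0.03 as perturbation magnitudes are all assumptions for illustration.

```python
# Sketch: per-transformation sensitivity probes for a point-cloud classifier.
# Assumption: `model` maps an (N, 3) float tensor of points to a logit vector.
import torch

def random_rotation(max_angle=3.14159):
    """Random 3D rotation via the matrix exponential of a skew-symmetric matrix."""
    w = torch.randn(3)
    w = w / w.norm() * torch.rand(()) * max_angle   # rotation axis scaled by angle
    wx, wy, wz = w.tolist()
    K = torch.tensor([[0.0, -wz,  wy],
                      [ wz, 0.0, -wx],
                      [-wy,  wx, 0.0]])
    return torch.matrix_exp(K)

def sensitivity(model, pc, transform, trials=32):
    """Mean change in the logits when `transform` is applied to the cloud `pc`."""
    with torch.no_grad():
        base = model(pc)
        deltas = [(model(transform(pc)) - base).norm() for _ in range(trials)]
    return torch.stack(deltas).mean()

def rotation_sensitivity(model, pc):
    return sensitivity(model, pc, lambda x: x @ random_rotation().T)

def translation_sensitivity(model, pc, d=0.03):
    # d = 0.03 reused from the quoted setup; its exact role there is assumed.
    return sensitivity(model, pc, lambda x: x + d * torch.randn(1, 3))

def scale_sensitivity(model, pc, gamma=0.003):
    # gamma = 0.003 reused from the quoted setup; its exact role there is assumed.
    return sensitivity(model, pc, lambda x: (1.0 + gamma * torch.randn(())) * x)
```

Comparing the three scores on the same cloud gives a rough, model-agnostic picture of which geometric factor the network is most vulnerable to, which is the spirit of the decomposition the abstract describes.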
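
The objective in the "Experiment Setup" row is a standard adversarial (min-max) training loss. The following is a minimal PyTorch sketch of one training step under stated assumptions: the transformation T is approximated as a bounded per-point offset solved by a few gradient-ascent steps, with the quoted d = 0.03 reused as the bound; the paper's actual family of transformations and its inner solver are not specified in this excerpt.

```python
# Sketch of min_w E_x max_T Loss(T(x), y_truth; w) for a batched classifier.
# Assumptions: `model` maps (B, N, 3) clouds to (B, C) logits; `label` is (B,).
import torch
import torch.nn.functional as F

def adversarial_step(model, optimizer, pc, label, eps=0.03, inner_steps=5, step=0.01):
    # Inner maximization over T: gradient ascent on a bounded offset delta,
    # so T(x) = x + delta stays within an L-infinity ball of radius eps.
    delta = torch.zeros_like(pc, requires_grad=True)
    for _ in range(inner_steps):
        loss = F.cross_entropy(model(pc + delta), label)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step * grad.sign()
            delta.clamp_(-eps, eps)
    # Outer minimization over w on the worst-case transformed clouds.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(pc + delta.detach()), label)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A real run would wrap this step over a ModelNet10 training loader with 1024-point clouds, as the quoted setup describes; the inner-loop step size and iteration count here are illustrative defaults, not values from the paper.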