ECO-3D: Equivariant Contrastive Learning for Pre-training on Perturbed 3D Point Cloud

Authors: Ruibin Wang, Xianghua Ying, Bowei Xing, Jinfa Yang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthesized and real-world perturbed datasets show that ECO-3D outperforms most existing pre-training strategies under various downstream tasks, achieving SOTA performance for lots of perturbations. Experiments suggest that ECO-3D outperforms most existing self-supervised pre-training frameworks on synthesized and real-world perturbed datasets, achieving SOTA performance under various downstream tasks.
Researcher Affiliation | Academia | Key Laboratory of Machine Perception (MOE), School of Intelligence Science and Technology, Peking University; {robin wang, xhying, xingbowei, jinfayang}@pku.edu.cn
Pseudocode | No | The paper describes the model and processes in text and equations but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We experiment on three 3D point cloud datasets with the synthesized or real-world perturbations, including RobustPointSet (Taghanaki et al. 2020), ScanObjectNN (Uy et al. 2019), and ShapeNet Part-C, to verify the ECO-3D framework. Specifically, RobustPointSet (Taghanaki et al. 2020) is generated by performing six synthesized perturbations on the original ModelNet40 (Wu et al. 2015). We select five perturbations (Noise, Rotation, Occlusion, Translate, and Missing Parts) for experiments. ScanObjectNN (Uy et al. 2019) contains 3D scans with five real-world perturbations... In addition to the classification task, we generate ShapeNet Part-C based on ShapeNet Part (Yi et al. 2016) for testing our method on the part segmentation task. (A sketch of this kind of synthesized perturbation is given below the table.)
Dataset Splits | No | The paper does not explicitly provide specific training/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification | Yes | The results are recorded using a single 2080Ti.
Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library names with version numbers) needed to replicate the experiment.
Experiment Setup | No | Detailed implementations of these steps can be found in our Supp. Material. The pre-training setting of VAE and more visualization results are provided in our Supp. Material. This implies that specific experimental setup details are not in the main text.
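The five synthesized perturbations named in the Open Datasets row (Noise, Rotation, Occlusion, Translate, Missing Parts) are standard point cloud corruptions. The paper's exact generation parameters are not given in this text, so the following minimal NumPy sketch only illustrates the general form of such perturbations; every parameter value (noise sigma, translation range, drop ratio, occlusion plane) is an illustrative assumption, not the authors' RobustPointSet setting.

```python
import numpy as np

def perturb_point_cloud(points, mode="noise", rng=None):
    """Apply one synthesized perturbation to an (N, 3) point cloud.

    All magnitudes below are assumed values for illustration only.
    """
    rng = np.random.default_rng() if rng is None else rng
    pts = points.copy()

    if mode == "noise":
        # Gaussian jitter on every point (assumed sigma = 0.02).
        pts += rng.normal(scale=0.02, size=pts.shape)
    elif mode == "rotation":
        # Random rotation about the z (up) axis.
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pts = pts @ rot.T
    elif mode == "translate":
        # Shift the whole cloud by a small random offset.
        pts += rng.uniform(-0.2, 0.2, size=(1, 3))
    elif mode == "missing":
        # Missing parts: remove a contiguous region around a random seed point.
        seed = pts[rng.integers(len(pts))]
        dists = np.linalg.norm(pts - seed, axis=1)
        pts = pts[np.argsort(dists)[len(pts) // 4:]]  # drop the closest 25%
    elif mode == "occlusion":
        # Keep only points on one side of a random plane through the origin.
        normal = rng.normal(size=3)
        normal /= np.linalg.norm(normal)
        pts = pts[pts @ normal > 0]
    else:
        raise ValueError(f"unknown perturbation: {mode}")
    return pts

# Example: corrupt a random cloud with jitter.
noisy = perturb_point_cloud(np.random.rand(1024, 3), mode="noise")
```

Passing an explicit `rng` (e.g., `np.random.default_rng(0)`) makes the corruption deterministic, which is useful when a fixed perturbed evaluation set is needed; the actual RobustPointSet release uses its own fixed generation procedure, so this sketch is only a stand-in.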