Unsupervised 3D Learning for Shape Analysis via Multiresolution Instance Discrimination

Authors: Peng-Shuai Wang, Yu-Qi Yang, Qian-Fang Zou, Zhirong Wu, Yang Liu, Xin Tong. Pages 2773–2781.

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the efficacy and generality of our method with a set of shape analysis tasks, including shape classification, semantic shape segmentation, as well as shape registration tasks."
Researcher Affiliation | Collaboration | Peng-Shuai Wang1, Yu-Qi Yang2,1, Qian-Fang Zou3,1, Zhirong Wu1, Yang Liu1, Xin Tong1 — 1Microsoft Research Asia; 2Tsinghua University; 3University of Science and Technology of China
Pseudocode | Yes | Algorithm 1: Network training procedure
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available, nor does it refer to supplementary materials for code.
Open Datasets | Yes | "We trained our MID-Net on ShapeNet dataset (Chang et al. 2015) that consists of 57,449 3D shapes." and "We use the ModelNet40 (Wu et al. 2015), which contains 13,834 3D models across 40 categories: 9,843 models are used for training and 3,991 models for testing."
Dataset Splits | No | For ModelNet40, the paper states "9,843 models are used for training and 3,991 models for testing." (Section 4.2). For the PartNet and ShapeNet Part datasets, it refers to external papers for the data split setup, (Mo et al. 2019) and (Yi et al. 2017a) respectively. However, it does not explicitly describe a validation split.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions adapting HRNet to octree-based convolutional neural networks and optimizing with SGD, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup | Yes | "τs is a parameter controlling the concentration level of the extracted features and is set to 0.1 empirically." and "All the point feature vectors are also unit-length and the control parameter τp is set to 0.1." and "λs is a momentum parameter and is set to 0.5 in our implementation." and "For point-instance classifier, we use a similar update rule: vi,c ← (1 − λp)·vi,c + λp·v̄i,c, where... λp is a momentum parameter and is set to 0.5 too."
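The parameters quoted in this row follow the usual instance-discrimination recipe: classifier weight vectors are kept unit-length in a memory bank, updated with momentum λ = 0.5, and compared against features with a temperature τ = 0.1. A minimal NumPy sketch of that mechanism follows; this is not the authors' code, and the function names, the re-normalization step, and the toy dimensions are illustrative assumptions.

```python
import numpy as np

def momentum_update(memory, new_feats, indices, lam=0.5):
    """Momentum update v <- (1 - lambda) * v + lambda * v_new for the
    selected memory-bank rows, then re-normalize to unit length
    (the paper keeps all feature vectors unit-length)."""
    memory[indices] = (1.0 - lam) * memory[indices] + lam * new_feats
    memory[indices] /= np.linalg.norm(memory[indices], axis=1, keepdims=True)
    return memory

def instance_logits(feature, memory, tau=0.1):
    """Temperature-scaled similarity scores between one unit-length
    feature and every memory-bank entry (tau = 0.1 as in the paper)."""
    return memory @ feature / tau

# Toy usage: a 4-entry memory bank of unit vectors.
memory = np.eye(4)
new_feat = np.array([[0.0, 1.0, 0.0, 0.0]])   # freshly extracted feature
memory = momentum_update(memory, new_feat, np.array([0]), lam=0.5)
scores = instance_logits(np.array([1.0, 0.0, 0.0, 0.0]), np.eye(4), tau=0.1)
```

The same update shape is reported for both the shape-instance classifier (λs) and the point-instance classifier (λp); only the indices being updated differ.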