NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF

Authors: Stefan Lionar, Xiangyu Xu, Min Lin, Gim Hee Lee

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental results demonstrate that NU-MCC is able to learn a strong 3D representation, significantly advancing the state of the art in single-view 3D reconstruction. In particular, it outperforms MCC by 9.7% in terms of the F1-score on the CO3D-v2 dataset with more than 5× faster running speed. |
| Researcher Affiliation | Collaboration | Stefan Lionar¹,², Xiangyu Xu³, Min Lin¹, Gim Hee Lee²; ¹Sea AI Lab, ²National University of Singapore, ³Xi'an Jiaotong University |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project page: https://numcc.github.io/ |
| Open Datasets | Yes | We conduct extensive experiments to show the representational power and generalization capability of NU-MCC for object-level single-view reconstruction using the CO3D-v2 dataset [5]. |
| Dataset Splits | Yes | We use the training-validation split from MCC's all-categories experiment. The quantitative results on the CO3D-v2 [5] validation set are summarized in Table 1. |
| Hardware Specification | Yes | Our model is trained with an effective batch size of 512 using 4 NVIDIA A100 GPUs for 100 epochs. |
| Software Dependencies | No | The paper mentions using the "Adam optimizer [47]" but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | Our model is trained with an effective batch size of 512 using 4 NVIDIA A100 GPUs for 100 epochs. One epoch takes approximately 2 hours. We follow the optimizer and 3D data augmentation of MCC. The Adam optimizer [47] with a base learning rate of 10⁻⁴, a cosine schedule, and linear warm-up for the first 5% of iterations is used. 3D data augmentation is performed by random scaling of s ∈ [0.8, 1.2] and rotation θ ∈ [-180°, 180°] along each axis. |
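The training schedule and augmentation quoted above (cosine learning-rate decay with linear warm-up over the first 5% of iterations, random scaling in [0.8, 1.2], and per-axis rotation in [-180°, 180°]) can be sketched as below. This is a minimal illustration using the hyperparameters reported in the paper, not the authors' implementation; the function names `lr_at` and `augment_points` are hypothetical.

```python
import math
import numpy as np

def lr_at(step, total_steps, base_lr=1e-4, warmup_frac=0.05):
    """Cosine schedule with linear warm-up for the first 5% of iterations.

    base_lr = 1e-4 and warmup_frac = 0.05 follow the paper's setup.
    """
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        # Linear ramp from ~0 up to base_lr.
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

def augment_points(points, rng):
    """Random scaling s in [0.8, 1.2] and a rotation in [-180°, 180°]
    about each axis, applied to an (N, 3) point array."""
    pts = points * rng.uniform(0.8, 1.2)
    for axis in range(3):
        theta = rng.uniform(-math.pi, math.pi)
        c, s = math.cos(theta), math.sin(theta)
        i, j = (axis + 1) % 3, (axis + 2) % 3
        R = np.eye(3)
        R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
        pts = pts @ R.T
    return pts
```

In practice such a schedule is usually applied per optimizer step, multiplying the base learning rate of an Adam optimizer; the augmentation is applied independently to each training sample.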