Deep Point Cloud Reconstruction

Authors: Jaesung Choe, ByeongIn Joung, Francois Rameau, Jaesik Park, In So Kweon

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our network achieves state-of-the-art performance among the recent studies in the ScanNet, ICL-NUIM, and ShapeNet-Part datasets.
Researcher Affiliation | Academia | KAIST, POSTECH
Pseudocode | No | The paper does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper does not explicitly state that open-source code for the described methodology is provided, nor does it include a link to a code repository.
Open Datasets | Yes | Our point reconstruction network has been solely trained on the ShapeNet-Part (Yi et al., 2016) dataset but tested on other real and synthetic datasets such as ScanNet (Dai et al., 2017) and ICL-NUIM (Handa et al., 2014).
Dataset Splits | Yes | To train and validate the networks, we carefully follow the official train/val/test split provided by (Yi et al., 2016).
Hardware Specification | No | The paper does not explicitly describe the specific hardware used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver names with version numbers).
Experiment Setup | Yes | We first train our voxel generation network for 10 epochs using the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 1e-3 and a batch size of 4. We decrease the learning rate by half every 2 epochs. [...] For the sake of fairness, we adjust the unit voxel length as l_vox = 0.0200 for training the networks. [...] We follow a data augmentation scheme proposed by the previous point upsampling study (Li et al., 2021), such as random noise addition and random re-scaling. Additionally, we include random outliers that constitute less than 5 percent of input points.
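As a reading aid, the sketch below translates the training recipe quoted in the Experiment Setup row into PyTorch-style code: Adam with an initial learning rate of 1e-3, batch size 4, the learning rate halved every 2 epochs over 10 epochs, unit voxel length 0.02, and the described augmentation (random noise, random re-scaling, and fewer than 5% random outliers). The placeholder model, noise scale, and re-scaling range are assumptions for illustration only; the paper releases no code, so this is not the authors' implementation.

```python
# Minimal sketch of the reported training setup (placeholders marked as such).
import numpy as np
import torch

VOXEL_SIZE = 0.02   # unit voxel length l_vox used during training
BATCH_SIZE = 4
INIT_LR = 1e-3
EPOCHS = 10         # voxel generation network training length


def augment(points: np.ndarray) -> np.ndarray:
    """Augmentation as described: random noise, random re-scaling,
    and random outliers constituting less than 5% of the input points."""
    pts = points.astype(np.float64, copy=True)
    pts += np.random.normal(scale=0.005, size=pts.shape)   # noise scale is assumed
    pts *= np.random.uniform(0.8, 1.2)                      # re-scaling range is assumed
    n_out = np.random.randint(0, max(1, int(0.05 * len(pts))))  # fewer than 5% outliers
    if n_out > 0:
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        idx = np.random.choice(len(pts), n_out, replace=False)
        pts[idx] = np.random.uniform(lo, hi, size=(n_out, 3))
    return pts


model = torch.nn.Linear(3, 3)  # placeholder for the voxel generation network
optimizer = torch.optim.Adam(model.parameters(), lr=INIT_LR)
# halve the learning rate every 2 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)

for epoch in range(EPOCHS):
    # ... iterate over ShapeNet-Part batches of size BATCH_SIZE here ...
    scheduler.step()
```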