Learning to Reconstruct Shapes from Unseen Classes

Authors: Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Josh Tenenbaum, Bill Freeman, Jiajun Wu

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments demonstrate that GenRe performs well on single-view shape reconstruction and generalizes to diverse novel objects from categories not seen during training. |
| Researcher Affiliation | Collaboration | Xiuming Zhang (MIT CSAIL); Zhoutong Zhang (MIT CSAIL); Chengkai Zhang (MIT CSAIL); Joshua B. Tenenbaum (MIT CSAIL); William T. Freeman (MIT CSAIL, Google Research); Jiajun Wu (MIT CSAIL) |
| Pseudocode | No | The paper describes the architecture and process in text but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions "Project page: http://genre.csail.mit.edu" but does not explicitly state that source code is provided there, give a direct link to a code repository, or mention code in supplementary materials. |
| Open Datasets | Yes | "We use ShapeNet [Chang et al., 2015] renderings for network training and testing." |
| Dataset Splits | No | The paper mentions training and testing on specific ShapeNet classes but does not explicitly describe a validation split (e.g., percentages, counts, or predefined splits) or how one would be used for reproduction. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., CPU or GPU models, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper mentions "Mitsuba [Jakob, 2010], a physically-based rendering engine," but does not provide version numbers for this or any other software dependency crucial for reproducibility. |
| Experiment Setup | No | The paper describes network architectures, loss functions, and training stages, but does not provide specific hyperparameter values such as learning rates, batch sizes, or number of epochs in the main text. |