MarrNet: 3D Shape Reconstruction via 2.5D Sketches
Authors: Jiajun Wu, Yifan Wang, Tianfan Xue, Xingyuan Sun, William T. Freeman, Joshua B. Tenenbaum
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our framework on both synthetic images of objects from ShapeNet [Chang et al., 2015] and real images from the PASCAL 3D+ dataset [Xiang et al., 2014]. We demonstrate that our framework performs well on 3D shape reconstruction, both qualitatively and quantitatively. |
| Researcher Affiliation | Collaboration | Jiajun Wu* (MIT CSAIL), Yifan Wang* (ShanghaiTech University), Tianfan Xue (MIT CSAIL), Xingyuan Sun (Shanghai Jiao Tong University), William T. Freeman (MIT CSAIL, Google Research), Joshua B. Tenenbaum (MIT CSAIL) |
| Pseudocode | No | The paper describes network architectures and mathematical formulas for loss functions, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing code or a link to a code repository. |
| Open Datasets | Yes | We start with experiments on synthesized images of ShapeNet chairs [Chang et al., 2015]. We use the same test set of PASCAL 3D+ as earlier works [Tulsiani et al., 2017]. The IKEA dataset [Lim et al., 2013] contains images of IKEA furniture, along with accurate 3D shape and pose annotations. |
| Dataset Splits | No | The paper states it uses ShapeNet objects for pre-training and describes the training objectives, but it does not specify quantitative training, validation, or test dataset splits (e.g., percentages or exact counts for each split). |
| Hardware Specification | No | The paper vaguely mentions 'a modern GPU' for fine-tuning but does not provide specific hardware details such as GPU or CPU models, memory, or processor types used for experiments. |
| Software Dependencies | Yes | We implemented our framework in Torch7 [Collobert et al., 2011]. |
| Experiment Setup | Yes | We use SGD for optimization with a batch size of 4, a learning rate of 0.001, and a momentum of 0.9. |
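
The experiment-setup row above reports only optimizer hyperparameters (SGD, batch size 4, learning rate 0.001, momentum 0.9). The sketch below shows how those settings could be instantiated, assuming a PyTorch-style reimplementation; the original code was written in Torch7 and is not released, so the model, input shapes, and loss here are hypothetical placeholders and only the three hyperparameter values come from the paper.

```python
# Hedged sketch of the reported optimization settings (not the authors' code).
# Only lr=0.001, momentum=0.9, and batch_size=4 are taken from the paper;
# the network, tensor shapes, and loss below are illustrative placeholders.
import torch
import torch.nn as nn

# Placeholder network standing in for the (unreleased) MarrNet modules.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.001,      # learning rate reported in the paper
    momentum=0.9,  # momentum reported in the paper
)

batch_size = 4     # batch size reported in the paper

# One illustrative training step on random tensors standing in for real data.
images = torch.randn(batch_size, 3, 256, 256)
targets = torch.randn(batch_size, 16, 256, 256)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()
optimizer.step()
```

Because the paper does not describe learning-rate schedules, weight decay, or iteration counts, any faithful reproduction would still need to choose those details independently.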