SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images

Authors: Chen-Hsuan Lin, Chaoyang Wang, Simon Lucey

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method outperforms the state of the art under challenging single-view supervision settings on both synthetic and real-world datasets. We evaluate on the multi-view rendered images and train all airplane and car models for 100K iterations and all chair models for 200K iterations. The results are reported in Table 1 and visualized in Fig. 4.
Researcher Affiliation | Academia | Chen-Hsuan Lin, Chaoyang Wang, and Simon Lucey, Carnegie Mellon University (chlin@cmu.edu, chaoyanw@andrew.cmu.edu, slucey@cs.cmu.edu)
Pseudocode | No | The paper describes the differentiable ray-marching algorithm and the bisection method but does not present them in a structured pseudocode block or an explicitly labeled algorithm figure.
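Although the paper provides no pseudocode, the bisection step it describes for localizing the surface along a ray can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the sphere SDF, function names, and iteration count are all assumptions, and the real method uses a learned SDF network rather than an analytic shape.

```python
import numpy as np

def sphere_sdf(p, radius=1.0):
    """Signed distance to a unit sphere at the origin (toy stand-in for a learned SDF)."""
    return np.linalg.norm(p) - radius

def bisect_surface(sdf, origin, direction, t_lo, t_hi, iters=32):
    """Bisection search for the zero crossing of an SDF along a ray.

    Assumes the interval [t_lo, t_hi] brackets the surface:
    sdf at t_lo is positive (outside) and sdf at t_hi is negative (inside).
    """
    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        if sdf(origin + t_mid * direction) > 0:
            t_lo = t_mid  # still outside the surface: raise the lower bound
        else:
            t_hi = t_mid  # inside the surface: lower the upper bound
    return 0.5 * (t_lo + t_hi)

# A ray starting at z = -3 pointing toward the origin hits the unit sphere at t = 2.
origin = np.array([0.0, 0.0, -3.0])
direction = np.array([0.0, 0.0, 1.0])
t_hit = bisect_surface(sphere_sdf, origin, direction, t_lo=0.0, t_hi=3.0)
```

Each bisection iteration halves the bracketing interval, so 32 iterations pin the intersection down to sub-micron precision on this toy example.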
Open Source Code | No | The paper includes a project website link 'https://chenhsuanlin.bitbucket.io/signed-distance-SRN/' on the first page, but this is a general project page, not an explicit statement of code release or a direct link to a source repository. The paper does not state 'We release our code' or provide a specific GitHub/GitLab link for the implementation.
Open Datasets | Yes | We evaluate our method on the airplane, car, and chair categories from ShapeNet v2 [3], which consists of 4045, 3533, and 6778 CAD models respectively. We demonstrate the efficacy of our method on PASCAL3D+ [45], a 3D reconstruction benchmarking dataset of real-world images with ground-truth CAD model annotations.
Dataset Splits | Yes | We split the dataset into training/validation/test sets following Yan et al. [46].
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models (e.g., NVIDIA A100), CPU types, or memory specifications used for its experiments. It discusses experimental settings but omits hardware.
Software Dependencies | No | The paper mentions using 'ResNet-18 [12]' as the encoder and the 'Adam optimizer [19]' but does not provide version numbers for these components or for other software libraries/frameworks (e.g., PyTorch 1.x, Python 3.x) that would be needed for replication.
Experiment Setup | Yes | For a fair comparison, we train all networks with the Adam optimizer [19] with a learning rate of 10^-4 and batch size 16. We choose M = 5 points for L_SDF and set the margin ε = 0.01 when training SDF-SRN. Unless otherwise specified, we choose the loss weights to be λ_RGB = 1, λ_SDF = 3, λ_eik = 0.01; we set λ_ray to be 1 for the last marched point and 0.1 otherwise. For each training iteration, we randomly sample 1024 pixels u from each image for faster training.
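The hyperparameters quoted above can be collected into a single configuration sketch. This is an illustrative assumption of how one might organize them for a reimplementation; the key names, the image resolution, and the `sample_pixels` helper are hypothetical and do not come from the paper.

```python
import numpy as np

# Hyperparameters as reported in the paper; the dict keys are illustrative names.
config = dict(
    learning_rate=1e-4,    # Adam learning rate
    batch_size=16,
    num_sdf_points=5,      # M points for the SDF loss L_SDF
    margin_eps=0.01,       # margin epsilon
    lambda_rgb=1.0,        # RGB loss weight
    lambda_sdf=3.0,        # SDF loss weight
    lambda_eik=0.01,       # eikonal regularizer weight
    lambda_ray_last=1.0,   # ray loss weight for the last marched point
    lambda_ray_other=0.1,  # ray loss weight for all other marched points
    pixels_per_image=1024, # pixels u sampled per image per iteration
)

def sample_pixels(height, width, num_samples, rng=None):
    """Sample random pixel coordinates (row, col) without replacement,
    mimicking the per-iteration pixel subsampling described in the paper."""
    rng = rng if rng is not None else np.random.default_rng()
    flat = rng.choice(height * width, size=num_samples, replace=False)
    return np.stack([flat // width, flat % width], axis=-1)

# Example: sample 1024 pixel locations from a hypothetical 128x128 image.
coords = sample_pixels(128, 128, config["pixels_per_image"])
```

Sampling without replacement keeps the 1024 supervised pixels distinct within an iteration, which is the natural reading of "randomly sample 1024 pixels u from each image".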