Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance
Authors: Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Ronen Basri, Yaron Lipman
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply our multiview surface reconstruction model to real 2D images from the DTU MVS repository [15]. Our experiments were run on 15 challenging scans... We evaluated the quality of our 3D surface reconstructions using the formal surface evaluation script of the DTU dataset, which measures the standard Chamfer-L1 distance between the ground truth and the reconstruction. We also report PSNR of train image reconstructions. We compare to the following baselines: DVR [33] (for fixed cameras), Colmap [40] (for fixed and trained cameras) and Furu [9] (for fixed cameras). (A simplified sketch of the Chamfer-L1 and PSNR metrics appears after the table.) |
| Researcher Affiliation | Academia | Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Ronen Basri, Yaron Lipman, Weizmann Institute of Science; {lior.yariv, yoni.kasten, dror.moran, meirav.galun, matan.atzmon, ronen.basri, yaron.lipman}@weizmann.ac.il |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data are available at https://github.com/lioryariv/idr. |
| Open Datasets | Yes | We apply our multiview surface reconstruction model to real 2D images from the DTU MVS repository [15]. |
| Dataset Splits | No | The paper mentions evaluating on the DTU dataset, training on mini-batches of pixels, and reporting PSNR of training-image reconstructions, but it does not specify explicit train/validation/test splits (e.g., image counts or percentages) needed for reproducibility. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper describes the neural network architectures (MLP sizes, layers, activations) and optimization details, but it does not specify software dependencies with version numbers (e.g., 'PyTorch 1.9' or 'TensorFlow 2.x'). |
| Experiment Setup | Yes | For the loss, equation 8, we set λ = 0.1 and ρ = 100. To approximate the indicator function with Sα(θ, τ), during training, we gradually increase α and by this constrain the shape boundaries in a coarse to fine manner: we start with α = 50 and multiply it by a factor of 2 every 250 epochs (up to a total of 5 multiplications). (A sketch of this schedule appears after the table.) |
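
The Chamfer-L1 and PSNR results quoted in the Research Type row come from the official DTU surface evaluation script and from renderings of the training images. As a rough illustration of what those two metrics compute, here is a minimal Python sketch; the function names are ours, and the official DTU script additionally applies observability masks and point-cloud downsampling, so this is a simplified approximation rather than the paper's evaluation code.

```python
# Simplified illustration of the two reported metrics. The official DTU
# script adds visibility masking and downsampling; treat this as a sketch.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_l1(points_pred, points_gt):
    """Symmetric Chamfer-L1: mean nearest-neighbor distance, both directions."""
    d_pred_to_gt, _ = cKDTree(points_gt).query(points_pred)  # accuracy
    d_gt_to_pred, _ = cKDTree(points_pred).query(points_gt)  # completeness
    return 0.5 * (d_pred_to_gt.mean() + d_gt_to_pred.mean())

def psnr(img_pred, img_gt, max_val=1.0):
    """Peak signal-to-noise ratio between rendered and ground-truth images."""
    mse = np.mean((img_pred - img_gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```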
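
The coarse-to-fine α schedule quoted in the Experiment Setup row is simple enough to state in code. In the sketch below, only the quoted numbers (α starts at 50 and is doubled every 250 epochs, at most 5 times; λ = 0.1 and ρ = 100 in equation 8) come from the paper; the function names, the sigmoid form of Sα, and the assignment of λ to the Eikonal term and ρ to the mask term are our assumptions, based on the paper's loss definition and the released code.

```python
import math

def alpha_schedule(epoch, alpha0=50.0, factor=2.0, every=250, max_mults=5):
    """Coarse-to-fine alpha: start at 50, double every 250 epochs, 5 times max."""
    n_mults = min(epoch // every, max_mults)
    return alpha0 * factor ** n_mults

def soft_indicator(min_sdf_along_ray, alpha):
    """S_alpha approximating the ray/surface indicator: sigmoid(-alpha * min SDF).
    Larger alpha sharpens the shape boundary (assumed sign convention)."""
    return 1.0 / (1.0 + math.exp(alpha * min_sdf_along_ray))

LAMBDA = 0.1  # assumed: Eikonal regularization weight in equation 8
RHO = 100.0   # assumed: mask loss weight in equation 8

# e.g. alpha_schedule(0) -> 50, alpha_schedule(500) -> 200,
# alpha_schedule(2000) -> 1600 (capped after 5 doublings)
```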