Image-guided Neural Object Rendering
Authors: Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our approach both qualitatively and quantitatively on synthetic as well as on real data. We demonstrate the effectiveness of our algorithm using synthetic and real data, and compare to classical computer graphics and learned approaches. |
| Researcher Affiliation | Academia | 1Technical University of Munich, 2Stanford University, 3Max-Planck-Institute for Informatics, 4University of Erlangen-Nuremberg |
| Pseudocode | No | The paper describes methods in narrative text and does not include formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a statement or link indicating that source code for the described method is publicly available. |
| Open Datasets | No | To generate photo-realistic synthetic imagery we employ the Mitsuba Renderer (Jakob, 2010) to simulate global illumination effects. For each of the N views, we raytrace a color image Ik and its corresponding depth map Dk. Our real world training data is captured using a Nikon D5300 at a resolution of 1920×1080 pixels. The paper describes generating its own synthetic and real-world datasets but does not provide access information (URL, DOI, repository, or a specific dataset citation) indicating that they are publicly available. |
| Dataset Splits | Yes | The size of the training sequence is 920 images; the test set contains 177 images. The training corpus ranges from 1000 to 1800 frames, depending on the sequence. |
| Hardware Specification | Yes | At test time our approach runs at interactive rates; EffectsNet runs at 50 Hz, while CompositionNet runs at 10 Hz on an Nvidia 1080 Ti. |
| Software Dependencies | No | Per object, both networks are trained independently using the Adam optimizer (Kingma & Ba, 2014) built into Tensorflow (Abadi et al., 2015). |
| Experiment Setup | Yes | Each network is trained for 64 epochs with a learning rate of 0.001 and the default parameters β1 = 0.9, β2 = 0.999, ϵ = 1e−8. |
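
For reference, the software-dependency and experiment-setup rows above correspond to a standard per-object training loop. The sketch below is a minimal, hedged illustration assuming TensorFlow 2.x and the reported Adam hyperparameters (64 epochs, learning rate 0.001, β1 = 0.9, β2 = 0.999, ϵ = 1e−8); the model, dataset pipeline, and L1 loss are hypothetical placeholders, not the authors' released code.

```python
# Hedged sketch of the per-object training setup quoted above.
# Adam hyperparameters follow the paper's experiment-setup row; the loss
# choice and the model/dataset objects are assumptions for illustration.
import tensorflow as tf

def make_optimizer():
    # Reported settings: lr = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8.
    return tf.keras.optimizers.Adam(
        learning_rate=1e-3, beta_1=0.9, beta_2=0.999, epsilon=1e-8)

def train_per_object(model, dataset, epochs=64):
    """Train one network (e.g. EffectsNet or CompositionNet) for a single object."""
    optimizer = make_optimizer()
    loss_fn = tf.keras.losses.MeanAbsoluteError()  # assumed photometric L1 loss
    for _ in range(epochs):
        for inputs, targets in dataset:  # dataset yields (input, target) image batches
            with tf.GradientTape() as tape:
                predictions = model(inputs, training=True)
                loss = loss_fn(targets, predictions)
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
```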