Semi-supervised Three-dimensional Reconstruction Framework with GAN
Authors: Chong Yu
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, through the qualitative and quantitative experiments & analysis, this new method shows compelling advantages over the current state-of-the-art methods on Tanks & Temples reconstruction benchmark dataset. |
| Researcher Affiliation | Industry | Chong Yu NVIDIA Corporation dxxzdxxz@126.com, chongy@nvidia.com |
| Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | We compare SS-3D-GAN with the state-of-the-art 3D reconstruction methods in various scenes benchmark. Here are the datasets we used in quantitative experiments. Tanks and Temples dataset [Knapitsch et al., 2017] |
| Dataset Splits | No | The paper mentions using 600 scene images as weak supervision and training for 150 epochs, but does not specify clear training/validation/test dataset splits or percentages. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU or CPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'PyTorch as the framework', 'Blender', and 'OpenDR' but does not specify version numbers for any of these software components. |
| Experiment Setup | Yes | We set the value of parameters as follows: λ = 0.7, α = 0.25, β = 0.6, γ = 0.15, θ = 10. In this experiment, we use 600 scene images as weak supervision. The learning rate of generative and discriminative networks is 0.063. We use PyTorch as the framework, and train the SS-3D-GAN with the iterative fine-tuning process of 150 epochs. |
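The reported hyperparameters can be collected into a single configuration object for a re-implementation attempt. This is a minimal sketch: the container class and field names are ours, and the paper (as quoted above) does not explain what the parameters λ, α, β, γ, θ weight, so they are recorded here verbatim without interpretation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SS3DGANConfig:
    """Hyperparameters as reported in the paper's experiment setup.

    Field names are illustrative, not taken from the authors' code;
    the roles of lam/alpha/beta/gamma/theta are not specified in the
    quoted setup and are kept as opaque reported values.
    """
    lam: float = 0.7                    # λ = 0.7
    alpha: float = 0.25                 # α = 0.25
    beta: float = 0.6                   # β = 0.6
    gamma: float = 0.15                 # γ = 0.15
    theta: float = 10.0                 # θ = 10
    learning_rate: float = 0.063        # for both generator and discriminator
    epochs: int = 150                   # iterative fine-tuning epochs
    weak_supervision_images: int = 600  # scene images used as weak supervision

config = SS3DGANConfig()
print(config.learning_rate, config.epochs, config.weak_supervision_images)
```

Note that dataset splits and hardware are not reported, so a faithful reproduction would still require choices the paper does not constrain.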