Single-Image Depth Perception in the Wild
Authors: Weifeng Chen, Zhao Fu, Dawei Yang, Jia Deng
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild. |
| Researcher Affiliation | Academia | Weifeng Chen, Zhao Fu, Dawei Yang, Jia Deng; University of Michigan, Ann Arbor; {wfchen,zhaofu,ydawei,jiadeng}@umich.edu |
| Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm block. |
| Open Source Code | No | The paper mentions a project website: 'Project website: http://www-personal.umich.edu/~wfchen/depth-in-the-wild.' However, it does not contain an unambiguous sentence explicitly stating that the source code for the methodology is being released or is available at this link. |
| Open Datasets | Yes | 'Our first contribution is a new dataset called Depth in the Wild (DIW). It consists of 495K diverse images... We demonstrate that this dataset can be used as an evaluation benchmark as well as a training resource.' Project website: http://www-personal.umich.edu/~wfchen/depth-in-the-wild. |
| Dataset Splits | Yes | 'We choose the threshold τ that minimizes the maximum of the three error metrics on a validation set held out from the training set.' (See the threshold-sweep sketch after this table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'deep neural networks' and 'off-the-shelf components' but does not specify any software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | No | The paper describes parts of the setup, such as using a variant of the hourglass network and sampling 800 point pairs per image, but the main text does not report hyperparameter values (learning rate, batch size, number of epochs, optimizer settings) or other system-level training details. (See the ranking-loss sketch after this table.) |
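
To make the Experiment Setup row concrete: the paper trains an hourglass-style network with a pair-wise ranking loss over sampled point pairs (about 800 per image), where ordered pairs incur a logistic ranking penalty and equal-depth pairs a squared-difference penalty. Below is a minimal PyTorch sketch of that loss. The function name, the sign convention for the labels, and the sum reduction are our assumptions; the paper does not state that its implementation was released.

```python
import torch
import torch.nn.functional as F

def relative_depth_loss(z_a, z_b, r):
    """Pair-wise ranking loss in the spirit of Chen et al. (2016).

    z_a, z_b : predicted depths at the two points of each sampled pair, shape (K,)
    r        : ordinal annotation per pair: +1 / -1 for ordered pairs
               (+1 assumed to mean the first point is farther), 0 for equal depth.
    """
    r = r.float()
    ordered = r != 0
    diff = z_a - z_b
    # Ordered pairs: log(1 + exp(-r * (z_a - z_b))); softplus is the stable form.
    loss_ordered = F.softplus(-r[ordered] * diff[ordered]).sum()
    # Pairs annotated as equal depth: squared-difference penalty.
    loss_equal = diff[~ordered].pow(2).sum()
    return loss_ordered + loss_equal

# Toy usage: 800 sampled pairs from one image, matching the paper's pair count.
z_a, z_b = torch.randn(800), torch.randn(800)
r = torch.randint(-1, 2, (800,))  # labels in {-1, 0, +1}
print(relative_depth_loss(z_a, z_b, r))
```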
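
Likewise, for the Dataset Splits row: the paper states that the threshold τ is chosen on a held-out validation set to minimize the maximum of three ordinal-error metrics (the WKDR variants). A minimal NumPy sketch of that sweep follows, assuming positive predicted depths and assuming the predicted relation of a pair is decided by thresholding the depth ratio at 1 + τ; both the decision rule and the search grid are our assumptions, not specifics given in the paper.

```python
import numpy as np

def wkdr_errors(z_a, z_b, r, tau):
    """Disagreement rates between predicted and annotated ordinal relations:
    (all pairs, equal-depth pairs only, ordered pairs only)."""
    ratio = z_a / z_b                   # assumes positive predicted depths
    pred = np.zeros_like(r)
    pred[ratio > 1 + tau] = 1           # first point predicted farther
    pred[ratio < 1 / (1 + tau)] = -1    # first point predicted closer
    wrong = pred != r
    return wrong.mean(), wrong[r == 0].mean(), wrong[r != 0].mean()

def pick_tau(z_a, z_b, r, grid=np.linspace(0.01, 0.5, 50)):
    """Sweep tau on held-out validation pairs and keep the value that
    minimizes the maximum of the three error metrics, as the quoted
    sentence describes."""
    return min(grid, key=lambda t: max(wkdr_errors(z_a, z_b, r, t)))
```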