DPSNet: End-to-end Deep Plane Sweep Stereo

Authors: Sunghoon Im, Hae-Gon Jeon, Stephen Lin, In So Kweon

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Ablation studies indicate that each of these technical contributions leads to appreciable improvements in reconstruction accuracy.
Researcher Affiliation | Collaboration | 1 KAIST, 2 Carnegie Mellon University, 3 Microsoft Research Asia
Pseudocode | No | The paper describes the pipeline and methods in text but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an explicit statement about releasing its source code or a direct link to a code repository for the described methodology.
Open Datasets | Yes | In the training procedure, we use image sequences, ground-truth depth maps for reference images, and the provided camera poses from public datasets, namely SUN3D, RGBD, and Scenes11.
Dataset Splits | No | The paper mentions using datasets for training and testing but does not provide specific details on the train/validation/test splits, such as percentages or sample counts for each subset.
Hardware Specification | Yes | The training is performed with a customized version of PyTorch on four NVIDIA 1080Ti GPUs, which usually takes four days.
Software Dependencies | No | The paper mentions using 'PyTorch' but does not specify a version number or other software dependencies with their respective versions.
Experiment Setup | Yes | We train our model from scratch for 1200K iterations in total. All models were trained end-to-end with the ADAM optimizer (β1 = 0.9, β2 = 0.999). We use a batch size of 16 and set the learning rate to 2e-4 for all iterations.
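
The "Experiment Setup" row quotes concrete optimizer settings from the paper (ADAM with β1 = 0.9, β2 = 0.999, learning rate 2e-4, batch size 16, 1200K iterations). Below is a minimal PyTorch sketch that wires up only those reported hyperparameters; the tiny stand-in network, random tensors, and L1 loss are assumptions added purely to make the snippet self-contained and are not the authors' DPSNet model, datasets, or loss function.

import torch
import torch.nn as nn

# Stand-in network (NOT DPSNet); only the optimizer settings mirror the paper.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

# ADAM optimizer with the reported betas and learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))

batch_size = 16            # reported batch size
total_iterations = 1_200_000  # "1200K iterations in total"

for step in range(10):  # shortened to 10 steps here; use total_iterations for a full run
    images = torch.randn(batch_size, 3, 64, 64)   # stand-in for the input views
    gt_depth = torch.rand(batch_size, 1, 64, 64)  # stand-in for ground-truth depth
    pred = model(images)
    loss = nn.functional.l1_loss(pred, gt_depth)  # placeholder loss, not the paper's
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The learning rate is kept constant "for all iterations" as stated, so no scheduler is attached in this sketch.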