NeurVPS: Neural Vanishing Point Scanning via Conic Convolution
Authors: Yichao Zhou, Haozhi Qi, Jingwei Huang, Yi Ma
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments on both synthetic and real-world datasets show that the proposed operator significantly improves the performance of vanishing point detection over traditional methods. |
| Researcher Affiliation | Academia | Yichao Zhou (UC Berkeley, zyc@berkeley.edu); Haozhi Qi (UC Berkeley, hqi@berkeley.edu); Jingwei Huang (Stanford University, jingweih@stanford.edu); Yi Ma (UC Berkeley, yima@eecs.berkeley.edu) |
| Pseudocode | No | The paper describes the algorithm and its components in text and diagrams but does not include formal pseudocode blocks or algorithm listings. |
| Open Source Code | Yes | The code and dataset have been made publicly available at https://github.com/zhou13/neurvps. |
| Open Datasets | Yes | We conduct experiments on both synthetic [49] and real-world [50, 13] datasets. ... Natural Scene [50]. ... ScanNet [13]. ... SU3 Wireframe [49]. |
| Dataset Splits | Yes | Natural Scene [50]: We divide them into 2,000 training images and 275 test images randomly. ... ScanNet [13]: There are 266,844 training images. We randomly sample 500 images from the validation set as our test set. ... SU3 Wireframe [49]: It contains 22,500 training images and 500 validation images. |
| Hardware Specification | Yes | All experiments are conducted on two NVIDIA RTX 2080Ti GPUs |
| Software Dependencies | No | We implement the conic convolution operator in PyTorch by modifying the im2col + GEMM function according to Equation (3)... The paper names PyTorch but does not pin a version number or list other software dependencies with their versions. (A sketch of this operator appears after the table.) |
| Experiment Setup | Yes | Input images are resized to 512 × 512. During training, the Adam optimizer [25] is used. Learning rate and weight decay are set to 4 × 10⁻⁴ and 1 × 10⁻⁵, respectively. ... For synthetic data [49], we train 30 epochs and reduce the learning rate by a factor of 10 at the 24-th epoch. ... For the Natural Scene dataset, we train the model for 100 epochs and decay the learning rate at the 60-th epoch. For ScanNet [13], we train the model for 3 epochs. ... We set N_d = 64 and use R_SU3 = 5, R_NS = 4, and R_SN = 3 in the coarse-to-fine inference for the SU3 dataset, the Natural Scene dataset, and the ScanNet dataset, respectively. (A minimal training-loop sketch follows the table.) |
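
The conic convolution that the paper implements via a modified im2col + GEMM kernel can be approximated with standard PyTorch ops: at every pixel, the 3×3 sampling grid is rotated so that one kernel axis points toward the candidate vanishing point. The snippet below is a minimal sketch under that reading of Equation (3); the class name `ConicConv2d`, the `vpt` argument convention, and the use of `grid_sample` are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConicConv2d(nn.Module):
    """Sketch of a 3x3 conic convolution: the sampling grid at each pixel is
    rotated so that one kernel axis points toward the vanishing point. This
    approximates the paper's im2col + GEMM formulation (Eq. (3)); names and
    conventions here are assumptions, not the authors' code."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # A 1x1 conv over the 9 rotated samples stacked on the channel axis
        # is equivalent to a rotated 3x3 convolution.
        self.weight = nn.Conv2d(in_channels * 9, out_channels, kernel_size=1)

    def forward(self, x, vpt):
        # x: (B, C, H, W) feature map; vpt: (B, 2) vanishing point in pixels.
        b, c, h, w = x.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=x.dtype, device=x.device),
            torch.arange(w, dtype=x.dtype, device=x.device),
            indexing="ij",
        )
        # Unit direction from every pixel toward the vanishing point.
        dx = vpt[:, 0].view(b, 1, 1) - xs
        dy = vpt[:, 1].view(b, 1, 1) - ys
        norm = torch.sqrt(dx * dx + dy * dy).clamp(min=1e-6)
        cos, sin = dx / norm, dy / norm

        samples = []
        for i in (-1, 0, 1):          # kernel offsets in the rotated frame
            for j in (-1, 0, 1):
                # Rotate the (j, i) offset so the kernel x-axis follows (cos, sin).
                ox = j * cos - i * sin
                oy = j * sin + i * cos
                gx = (xs + ox) / (w - 1) * 2 - 1   # normalize to [-1, 1]
                gy = (ys + oy) / (h - 1) * 2 - 1
                grid = torch.stack([gx, gy], dim=-1)  # (B, H, W, 2)
                samples.append(F.grid_sample(x, grid, align_corners=True))
        return self.weight(torch.cat(samples, dim=1))
```

Using `grid_sample` trades the paper's im2col + GEMM formulation for brevity; either way, rotated offsets land at non-integer coordinates, so some interpolation (bilinear here) is required.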
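
The reported optimization setup maps onto standard PyTorch components. Below is a minimal training-loop sketch of the SU3 schedule (Adam, learning rate 4 × 10⁻⁴, weight decay 1 × 10⁻⁵, 30 epochs, 10× decay at epoch 24); `model` and `train_loader` are hypothetical stand-ins, not the released NeurVPS code at https://github.com/zhou13/neurvps.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real network and the 512x512 data pipeline.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
train_loader = [(torch.randn(2, 3, 512, 512),) for _ in range(4)]

optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, weight_decay=1e-5)
# SU3 schedule: 30 epochs, learning rate divided by 10 at the 24-th epoch.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[24], gamma=0.1)

for epoch in range(30):
    for (images,) in train_loader:
        loss = model(images).mean()   # placeholder loss, not the paper's objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```

For the other datasets the same skeleton applies with the reported changes: 100 epochs with decay at epoch 60 for Natural Scene, and 3 epochs for ScanNet.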