Learning Affinity via Spatial Propagation Networks
Authors: Sifei Liu, Shalini De Mello, Jinwei Gu, Guangyu Zhong, Ming-Hsuan Yang, Jan Kautz
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic segmentation tasks show that the spatial propagation network provides a general, effective and efficient solution for generating high-quality segmentation results. |
| Researcher Affiliation | Collaboration | Sifei Liu UC Merced, NVIDIA Shalini De Mello NVIDIA Jinwei Gu NVIDIA Guangyu Zhong Dalian University of Technology Ming-Hsuan Yang UC Merced, NVIDIA Jan Kautz NVIDIA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code, nor does it explicitly state that code is available. |
| Open Datasets | Yes | The PASCAL VOC 2012 segmentation benchmark [6] involves 20 foreground object classes and one background class. The original dataset contains 1464 training, 1449 validation and 1456 testing images, with pixel-level annotations. |
| Dataset Splits | Yes | The original dataset contains 1464 training, 1449 validation and 1456 testing images, with pixel-level annotations. |
| Hardware Specification | No | The paper mentions inference times and computational settings but does not provide specific hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper states 'We implement the network with a modified version of CAFFE [12]' but does not provide specific version numbers for CAFFE or any other software dependencies. |
| Experiment Setup | Yes | We use the SGD optimizer, and set the base learning rate to 0.0001. In general, we train the networks for the HELEN and VOC segmentation tasks for about 40 and 100 epochs, respectively. We fix the size of our input patches to 128 × 128, use the softmax loss, and use the SGD solver for all the experiments. |
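The reported setup (SGD, base learning rate 0.0001, 128 × 128 patches, softmax loss) can be sketched as a minimal stand-in in plain numpy. This is an illustrative reconstruction, not the authors' CAFFE implementation: the function names and the absence of momentum or weight decay are assumptions, since the paper states only the hyperparameters quoted above.

```python
import numpy as np

# Hyperparameters quoted from the paper's experiment setup.
BASE_LR = 1e-4            # SGD base learning rate
PATCH_SIZE = (128, 128)   # input patch size
EPOCHS_HELEN, EPOCHS_VOC = 40, 100

def softmax(logits):
    # Numerically stable softmax over the class axis (last dim).
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_loss(logits, labels):
    # Mean cross-entropy ("softmax loss") over all samples/pixels.
    # logits: (N, C) array; labels: (N,) integer class indices.
    p = softmax(logits)
    n = labels.size
    return -np.log(p.reshape(n, -1)[np.arange(n), labels.ravel()]).mean()

def sgd_step(weights, grad, lr=BASE_LR):
    # Plain SGD update; the paper does not report momentum or weight decay,
    # so none is applied here (an assumption).
    return weights - lr * grad
```

For uniform logits over three classes the loss is ln 3 per sample, which is a quick sanity check that the loss is wired up correctly before training.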