Recursive Context Propagation Network for Semantic Scene Labeling
Authors: Abhishek Sharma, Oncel Tuzel, Ming-Yu Liu
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on Stanford background and SIFT Flow datasets show that the proposed method outperforms previous approaches. |
| Researcher Affiliation | Collaboration | Abhishek Sharma, University of Maryland, College Park, MD (bhokaal@cs.umd.edu); Oncel Tuzel and Ming-Yu Liu, Mitsubishi Electric Research Labs (MERL), Cambridge, MA ({oncel,mliu}@merl.com) |
| Pseudocode | No | The paper describes its methods in prose and diagrams but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using the 'publicly available implementation CAFFE [10]' and a 'publicly available implementation' for L-BFGS, but does not state that the code for their proposed method (rCPN) is open-source or available. |
| Open Datasets | Yes | We extensively tested the proposed model on two widely used datasets for semantic scene labeling: Stanford background [13] and SIFT Flow [14]. |
| Dataset Splits | Yes | We used the 572 train and 143 test image split provided by [7] for reporting the results. SIFT Flow contains 2688 256×256 color images with 33 semantic classes. We experimented with the train/test (2488/200) split provided by the authors of [15]. |
| Hardware Specification | Yes | Our fast method (Section 2.1) takes only 0.37 seconds (0.3 for super-pixel segmentation, 0.06 for feature extraction and 0.01 for rCPN and labeling) to label a 256×256 image starting from the raw RGB image on a GTX Titan GPU and 1.1 seconds on an Intel Core i7 CPU. |
| Software Dependencies | No | The paper mentions using 'CAFFE [10]' and 'Limited memory BFGS [12]' with a link to an implementation, but does not specify version numbers for these or any other software dependencies used in their experiments. |
| Experiment Setup | Yes | All the training images were flipped horizontally to get twice the original images. We used dropout in the last layer with dropout ratio equal to 0.5. Standard back-propagation for CNN is used with stochastic gradient descent update scheme on mini-batches of 6 images, with weight decay (λ = 5×10⁻⁵) and momentum (µ = 0.9). ...dsem = 60 for all the experiments. |
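The Experiment Setup row quotes the paper's optimizer hyperparameters (momentum µ = 0.9, weight decay λ = 5×10⁻⁵, mini-batches of 6 images). As a minimal sketch of the update rule those settings imply, the per-parameter SGD step with momentum and weight decay can be written in plain Python; the function and variable names are ours for illustration, not the authors' code, and the learning rate is an assumed free parameter (the paper excerpt does not fix it).

```python
# Illustrative SGD update with momentum and weight decay, matching the
# hyperparameters quoted from the paper. This is a sketch of the generic
# update rule, not the authors' implementation (they used CAFFE [10]).

MOMENTUM = 0.9       # mu, from the paper
WEIGHT_DECAY = 5e-5  # lambda, from the paper

def sgd_step(weights, grads, velocity, lr):
    """One update over a parameter list:
       v <- mu * v - lr * (grad + lambda * w)
       w <- w + v
    Returns the updated (weights, velocity) lists."""
    new_w, new_v = [], []
    for w, g, v in zip(weights, grads, velocity):
        v = MOMENTUM * v - lr * (g + WEIGHT_DECAY * w)
        new_v.append(v)
        new_w.append(w + v)
    return new_w, new_v
```

In practice this step would be applied once per mini-batch of 6 images, with gradients averaged over the batch; the dropout ratio of 0.5 quoted above applies only to the last layer during training.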