Deeply Learning the Messages in Message Passing Inference

Authors: Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We apply our method to semantic image segmentation and achieve impressive performance, which demonstrates the effectiveness and usefulness of our CNN message learning method." "We evaluate the proposed CNN message learning method for semantic image segmentation. We use the publicly available PASCAL VOC 2012 dataset [19]. Results are shown in Table 1."
Researcher Affiliation | Academia | Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel; The University of Adelaide, Australia, and Australian Centre for Robotic Vision. E-mail: {guosheng.lin,chunhua.shen,ian.reid,anton.vandenhengel}@adelaide.edu.au
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "We use the publicly available PASCAL VOC 2012 dataset [19]. There are 20 object categories and one background category in the dataset. It contains 1464 images in the training set, 1449 images in the val set and 1456 images in the test set. Following the common practice in [20, 9], the training set is augmented to 10582 images by including the extra annotations provided in [21] for the VOC images."
Dataset Splits | Yes | "It contains 1464 images in the training set, 1449 images in the val set and 1456 images in the test set. Following the common practice in [20, 9], the training set is augmented to 10582 images by including the extra annotations provided in [21] for the VOC images."
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The only software mentioned is that the system "is built on MatConvNet [23]"; no version numbers or other dependencies are specified.
Experiment Setup | Yes | "We formulate our message estimators as multi-scale FCNNs, for which we apply a similar network configuration as in [3]. The network C(1) (see Sec. 3.2 for details) has 6 convolution blocks and C(2) has 2 fully connected layers (with K output units). Our networks are initialized using the VGG-16 model [22]. We train all layers using back-propagation. Our system is built on MatConvNet [23]. For the learning and prediction of our method, we only use one message passing iteration."
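
The Experiment Setup row describes the message-estimator networks only at the level of block counts. Below is a minimal PyTorch sketch of that layout, assuming a VGG-16 backbone for C(1) and 1x1 convolutions for the two fully connected layers of C(2); the class name MessageEstimator, the choice K = 21 (20 VOC categories plus background), the extra sixth convolution block, and the way unary scores and messages are combined in predict are illustrative assumptions, not the authors' MatConvNet implementation.

```python
# Illustrative sketch only; the paper's implementation is in MatConvNet and
# follows the multi-scale configuration of its reference [3].
import torch
import torch.nn as nn
from torchvision.models import vgg16

K = 21  # assumed: 20 PASCAL VOC object categories + background


class MessageEstimator(nn.Module):
    def __init__(self, num_classes: int = K):
        super().__init__()
        # C(1): convolution blocks, initialised from a pretrained VGG-16.
        # VGG-16 provides 5 blocks; a 6th block is added here as an assumption
        # to match the "6 convolution blocks" quoted above.
        self.c1 = nn.Sequential(
            vgg16(weights="IMAGENET1K_V1").features,
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # C(2): two fully connected layers with K output units, written as
        # 1x1 convolutions so the network stays fully convolutional.
        self.c2 = nn.Sequential(
            nn.Conv2d(512, 1024, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(1024, num_classes, kernel_size=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        features = self.c1(image)   # shared convolutional features
        return self.c2(features)    # K message values per spatial position


def predict(unary: torch.Tensor, messages: torch.Tensor) -> torch.Tensor:
    """One message passing iteration, schematically: combine unary scores with
    the estimated messages and normalise. A stand-in, not the paper's exact
    inference formula."""
    return torch.softmax(unary + messages, dim=1)
```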
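
The Open Datasets and Dataset Splits rows quote the split sizes (1464 train / 1449 val / 1456 test) and the augmentation of the training set to 10582 images using the extra annotations of [21]. The snippet below is a sketch of how such an augmented training list is typically assembled; the directory layout and file names are assumptions and do not come from the paper.

```python
# Sketch under assumed paths: standard VOC 2012 split files plus a list of
# extra annotated images [21]. Only the split sizes come from the paper.
from pathlib import Path


def read_ids(path: str) -> set[str]:
    """Read one image id per line from a split file."""
    return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}


voc_train = read_ids("VOCdevkit/VOC2012/ImageSets/Segmentation/train.txt")  # 1464 ids
voc_val   = read_ids("VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt")    # 1449 ids
sbd_extra = read_ids("SBD/extra_annotations.txt")  # hypothetical list of extra images

# Augmented training set: VOC train plus the extra annotated images,
# excluding anything that appears in the val set (expected size: 10582).
train_aug = sorted((voc_train | sbd_extra) - voc_val)
print(len(voc_train), len(voc_val), len(train_aug))
```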