Image-level to Pixel-wise Labeling: From Theory to Practice

Authors: Tiezhu Sun, Wei Zhang, Zhijie Wang, Lin Ma, Zequn Jie

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on benchmark dataset demonstrate the effectiveness of the proposed method, where good image-level labels can significantly improve the pixel-wise segmentation accuracy.
Researcher Affiliation | Collaboration | School of Control Science and Engineering, Shandong University; Tencent AI Lab, Shenzhen, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | In this section, the evaluation is conducted on the benchmark segmentation dataset PASCAL VOC 2012 [Everingham et al., 2010], which consists of 21 classes of objects (including background). Similar to [Zhao et al., 2016], we use the augmented data of PASCAL VOC 2012 with annotation of [Hariharan et al., 2011] resulting 11,295, 736, 1456 samples for training, validation and testing, respectively.
Dataset Splits | Yes | resulting 11,295, 736, 1456 samples for training, validation and testing, respectively.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions optimizers (SGD and Adam) and learning rates but does not provide specific version numbers for software dependencies or libraries.
Experiment Setup | Yes | In the training stage, SGD and Adam were employed as optimizers to train the segmentation and generative networks with the same learning rate of 10^-10, respectively. The iteration number 100,000 is set for all experiments.
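The reported optimizer configuration (SGD for the segmentation network, Adam for the generative network, both at learning rate 10^-10, 100,000 iterations) can be sketched as follows. This is a minimal illustration only: the paper does not state which framework was used, and the two `nn.Conv2d` stand-in modules are hypothetical placeholders for the actual segmentation and generative architectures.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in modules; the paper's real network architectures
# are not specified in the excerpts above.
segmentation_net = nn.Conv2d(3, 21, kernel_size=1)  # 21 PASCAL VOC classes incl. background
generative_net = nn.Conv2d(21, 3, kernel_size=1)

# As reported: SGD trains the segmentation network, Adam trains the
# generative network, both with the same learning rate of 10^-10.
opt_seg = torch.optim.SGD(segmentation_net.parameters(), lr=1e-10)
opt_gen = torch.optim.Adam(generative_net.parameters(), lr=1e-10)

# The paper fixes 100,000 iterations for all experiments.
NUM_ITERATIONS = 100_000
```

Note that a learning rate of 10^-10 is unusually small for either optimizer; the value is reproduced here exactly as quoted from the paper.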