FeaBoost: Joint Feature and Label Refinement for Semantic Segmentation

Authors: Yulei Niu, Zhiwu Lu, Songfang Huang, Xin Gao, Ji-Rong Wen

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the MSRC and LabelMe datasets demonstrate the superior performance of our FeaBoost approach in comparison with the state-of-the-art methods, especially when noisy labels are provided for semantic segmentation.
Researcher Affiliation | Collaboration | (1) Beijing Key Laboratory of Big Data Management and Analysis Methods, School of Information, Renmin University of China, Beijing 100872, China; (2) IBM China Research Lab, Beijing, China; (3) Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, Jeddah 23955, Saudi Arabia
Pseudocode | Yes | Algorithm 1: FeaBoost
Open Source Code | No | The paper does not provide any concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology described.
Open Datasets | Yes | The MSRC dataset consists of 591 images and 21 classes, and is standardly split into 276 training images and 256 test images. Experiments are also conducted on the more challenging LabelMe dataset (also known as SIFT Flow), which contains 2,688 outdoor images with 33 outdoor classes, including sky, tree, grass, and building; the standard split of 2,488 training images and 200 test images is used for this dataset. (A split-coverage check appears after this table.)
Dataset Splits | No | The paper states: 'It should be noted that only image-level labels of the training set are known, and the annotations of the test images are unseen during the training and test stages. To infer the annotations of the test images, a 4,096-dimensional CNN feature is extracted from each image, and C one-vs-all SVM classifiers are trained using LIBLINEAR for prediction. Since the pixel-level labels are unknown under the weakly supervised setting, it is impossible to select the hyperparameters by cross-validation.' The paper explicitly describes only training and test sets for its primary experiments, defines no validation set that could support reproduction, and states that cross-validation for hyperparameter selection was not possible. (A hedged sketch of the one-vs-all classifier setup follows the table.)
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU/CPU models, processor types, or memory amounts, used for running its experiments.
Software Dependencies | No | The paper mentions that 'several one-vs-all SVM classifiers are trained using LIBLINEAR (Fan et al. 2008)' but does not provide a specific version number for LIBLINEAR or any other software dependency.
Experiment Setup | Yes | In this paper, the hyperparameters are uniformly set as k = 30, λ1 = 900, λ2 = 1 and λ3 = 0.15 for the two datasets. (These values are collected in a configuration sketch after the table.)
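
The split sizes quoted in the Open Datasets row can be sanity-checked with simple arithmetic, which also illustrates the "Dataset Splits: No" finding. A minimal sketch follows; the image counts are taken from the paper's description, and everything else (names, printing) is illustrative:

```python
# Split sizes as quoted from the paper.
datasets = {
    "MSRC":    {"total": 591,  "train": 276,  "test": 256},
    "LabelMe": {"total": 2688, "train": 2488, "test": 200},
}

for name, s in datasets.items():
    covered = s["train"] + s["test"]
    # MSRC: 276 + 256 = 532 < 591, so 59 images are unaccounted for by the
    # quoted split; LabelMe: 2,488 + 200 = 2,688, covering the full dataset.
    print(f"{name}: {covered}/{s['total']} images covered by train + test")
```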
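
The classifier setup quoted in the Dataset Splits row (a 4,096-dimensional CNN feature per image, with C one-vs-all SVMs trained via LIBLINEAR) could look roughly as follows. This is a hypothetical sketch, not the authors' code: it assumes pre-extracted features, substitutes scikit-learn's LinearSVC (a LIBLINEAR wrapper) for LIBLINEAR itself, and the regularization value C_reg and decision threshold are illustrative assumptions rather than details from the paper:

```python
import numpy as np
from sklearn.svm import LinearSVC  # scikit-learn's wrapper around LIBLINEAR

def train_one_vs_all(X_train, Y_train, C_reg=1.0):
    """Train one binary linear SVM per class (one-vs-all).

    X_train: (n_images, 4096) CNN features, one row per image.
    Y_train: (n_images, n_classes) binary image-level label matrix.
    """
    classifiers = []
    for c in range(Y_train.shape[1]):
        clf = LinearSVC(C=C_reg)          # C_reg is an assumed setting
        clf.fit(X_train, Y_train[:, c])   # class c vs. the rest
        classifiers.append(clf)
    return classifiers

def predict_image_labels(classifiers, X_test, threshold=0.0):
    """Predict image-level labels: a positive margin marks a predicted class."""
    scores = np.stack(
        [clf.decision_function(X_test) for clf in classifiers], axis=1
    )
    return scores > threshold  # (n_images, n_classes) boolean mask
```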
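
Finally, the hyperparameters reported in the Experiment Setup row can be gathered into a single configuration for a reimplementation attempt. Only the numeric values come from the paper; the key names are assumed for illustration:

```python
# Values reported in the paper, uniform across MSRC and LabelMe.
FEABOOST_HYPERPARAMS = {
    "k":        30,     # k
    "lambda_1": 900.0,  # λ1
    "lambda_2": 1.0,    # λ2
    "lambda_3": 0.15,   # λ3
}
```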