Beyond IID: Learning to Combine Non-IID Metrics for Vision Tasks

Authors: Yinghuan Shi, Wenbin Li, Yang Gao, Longbing Cao, Dinggang Shen

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The results show that learning and integrating non-IID metrics improves performance compared to the IID methods. Moreover, our method achieves results comparable to or better than the state of the art." (Table 1: error rates of all comparison methods; Table 2: Accuracy (AC), Specificity (SP), Sensitivity (SE), F1 score, and AUC of all methods.)
Researcher Affiliation | Academia | State Key Laboratory for Novel Software Technology, Nanjing University, China; Advanced Analytics Institute, University of Technology Sydney, Australia; Department of Radiology and BRIC, UNC-Chapel Hill, USA
Pseudocode | Yes | Algorithm 1 NIME-CK
Input: K_p, φ_ij and y_ij. Output: Ω, w_p (p = 1, ..., P).
1: w_p^1 ← 1/P
2: Ω^1 ← kernel PCA initialization (2007)
3: while not converged do
4:   Ω^{t+1} ← Ω^t − ρΓ^t as in Eqn. (8)
5:   w_p^{t+1} ← solution of Eqn. (10) by SA
6: end while
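The alternating scheme in Algorithm 1 can be sketched in Python. This is a minimal, hypothetical stand-in: the paper's actual gradient Γ^t (Eqn. (8)) and the simulated-annealing (SA) solve for w_p (Eqn. (10)) are not reproduced, so a toy quadratic loss and a softmax re-weighting take their place.

```python
import numpy as np

def nime_ck_sketch(K_list, y, rho=0.01, n_iter=50):
    """Toy sketch of Algorithm 1 (NIME-CK): alternate between a
    gradient step on the metric Omega and a re-weighting of the
    per-kernel weights w_p.  Gamma and the w-update below are
    hypothetical stand-ins for Eqn. (8) and the SA solve of Eqn. (10)."""
    P, n = len(K_list), K_list[0].shape[0]
    w = np.full(P, 1.0 / P)            # line 1: w_p^1 <- 1/P
    Omega = np.eye(n)                  # line 2: stand-in for kernel-PCA init
    for _ in range(n_iter):
        # combined kernel under the current weights
        K = sum(wp * Kp for wp, Kp in zip(w, K_list))
        Gamma = Omega @ K - np.outer(y, y)   # toy gradient (not the paper's)
        Omega = Omega - rho * Gamma          # line 4: Omega <- Omega - rho*Gamma
        # line 5 stand-in: softmax re-weighting instead of simulated annealing
        scores = np.array([float(np.sum(Omega * Kp)) for Kp in K_list])
        w = np.exp(scores - scores.max())
        w /= w.sum()
    return Omega, w
```

The loop structure (fixed-point iteration over Ω and w until convergence) is the part that mirrors the pseudocode; everything inside the updates is illustrative.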
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "Natural Image Segmentation: We evaluate the NIME models against various segmentation methods on the MSRC image set (Rother, Kolmogorov, and Blake 2004), a challenging and commonly used benchmark for image segmentation, with results available from many existing methods for comparison."
Dataset Splits | Yes | "10-fold cross validation was used for all the baseline and our methods. All the parameters (e.g., λ) were experimentally determined by inner cross validation."
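The quoted protocol (outer 10-fold cross validation, with parameters such as λ chosen by inner cross validation on each training split) can be sketched as follows. The ridge-style least-squares classifier and the candidate λ grid are hypothetical stand-ins for the NIME models, which the paper does not release.

```python
import numpy as np

def kfold(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k folds."""
    return np.array_split(np.random.default_rng(seed).permutation(n), k)

def fit_ridge(X, y, lam):
    """Hypothetical stand-in classifier: regularised least squares."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def nested_cv_error(X, y, lambdas, k_outer=10, k_inner=5):
    """Outer k-fold CV; lambda is picked by inner CV on each training split,
    so the test fold never influences parameter selection."""
    n, errs = len(y), []
    for test_idx in kfold(n, k_outer):
        train_idx = np.setdiff1d(np.arange(n), test_idx)

        def inner_err(lam):
            # inner CV runs over the training split only
            es = []
            for v in kfold(len(train_idx), k_inner, seed=1):
                tr = train_idx[np.setdiff1d(np.arange(len(train_idx)), v)]
                va = train_idx[v]
                w = fit_ridge(X[tr], y[tr], lam)
                es.append(np.mean(np.sign(X[va] @ w) != y[va]))
            return np.mean(es)

        best_lam = min(lambdas, key=inner_err)
        w = fit_ridge(X[train_idx], y[train_idx], best_lam)
        errs.append(np.mean(np.sign(X[test_idx] @ w) != y[test_idx]))
    return float(np.mean(errs))
```

The key design point this illustrates is that the inner loop only ever sees `train_idx`, matching the paper's claim that λ is tuned by inner cross validation rather than on the held-out fold.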
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, or cloud instances) used to run the experiments.
Software Dependencies | No | The paper mentions software components like SLIC and LeNet but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "For experimental settings, for each superpixel, we choose its adjacent superpixels as the spatial neighbors. For superpixel over-segmentation, we employ SLIC (Achanta et al. 2012) to over-segment each image into a number of non-overlapping superpixels (typically 500-1500). For feature representation, we extract LBP (30-dimensional), Gabor (48-dimensional), color (histogram, mean, and variance; 66-dimensional in total), and intensity (4-dimensional) features, giving a 148-dimensional representation per superpixel. All the parameters (e.g., λ) are experimentally determined by inner cross validation. In non-IID metric learning, for each current cell, we empirically choose its top 5 closest cells as the spatial neighbors."
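The feature bookkeeping in the quoted setup (30 + 48 + 66 + 4 = 148 dimensions per superpixel) can be checked with a small sketch. The LBP, Gabor, and colour extractors are not reproduced here; random placeholders of the reported sizes mark where they would go, and only the 4-d intensity block (assumed here to be mean/variance/min/max of grey level, which the paper does not spell out) is computed from the pixels.

```python
import numpy as np

# Per-superpixel feature dimensions as reported in the paper.
FEATURE_DIMS = {"LBP": 30, "Gabor": 48, "color": 66, "intensity": 4}

def describe_superpixel(pixels, rng=np.random.default_rng(0)):
    """Assemble the 148-d superpixel descriptor.  LBP/Gabor/colour
    blocks are random placeholders of the reported sizes; only the
    intensity block is actually derived from the input pixels."""
    parts = [
        rng.random(FEATURE_DIMS["LBP"]),    # LBP histogram (placeholder)
        rng.random(FEATURE_DIMS["Gabor"]),  # Gabor responses (placeholder)
        rng.random(FEATURE_DIMS["color"]),  # colour histogram/mean/variance (placeholder)
    ]
    # Intensity block: assumed mean/variance/min/max of the grey level.
    grey = pixels.mean(axis=-1).ravel()
    parts.append(np.array([grey.mean(), grey.var(), grey.min(), grey.max()]))
    return np.concatenate(parts)
```

A descriptor built this way has exactly the 148 dimensions the paper reports, which is the only property the sketch is meant to verify.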