Unsupervised Feature Selection with Structured Graph Optimization

Authors: Feiping Nie, Wei Zhu, Xuelong Li

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on various benchmark data sets, including handwritten digit data, face image data, and biomedical data, validate the effectiveness of the proposed approach.
Researcher Affiliation | Academia | 1. School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P.R. China. 2. Center for OPTical IMagery Analysis and Learning (OPTIMAL), Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, Shaanxi, P.R. China.
Pseudocode | Yes | Algorithm 1 (to solve problem (9)) and Algorithm 2 (to solve problem (6)).
Open Source Code | No | The paper provides no concrete access information (e.g., a repository link or an explicit statement of code release) for the source code.
Open Datasets | Yes | The experiments are conducted on 8 publicly available data sets: handwritten digit (Binary Alphabet (BA) (Belhumeur, Hespanha, and Kriegman 1997), UMIST (Hou et al. 2014), USPS (Hull 1994)), human face (JAFFE (Lyons, Budynek, and Akamatsu 1999), ORL (Cai, Zhang, and He 2010)), object image (COIL20 (Nene, Nayar, and Murase 1996)), and biology (SRBCT (Khan et al. 2001), Lung (Singh et al. 2002)).
Dataset Splits | No | The paper does not explicitly provide training/validation/test splits. It mentions running K-means 5 times and reporting the optimal result, but this does not specify how the data was partitioned for model development and evaluation.
Hardware Specification | No | The paper does not report any hardware details (e.g., GPU/CPU models, memory) used for its experiments.
Software Dependencies | No | The paper does not list any specific software dependencies (e.g., library or solver names with version numbers).
Experiment Setup | Yes | Parameters of all approaches are tuned over the same grid, {10^-3, 10^-2, 10^-1, 1, 10, 10^2, 10^3}, to keep the comparison fair. The parameter m influences performance only slightly and is set empirically to around d/3, so the analysis focuses on the parameter γ with m fixed.
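The experiment-setup protocol described above (sweep γ over the grid {10^-3, …, 10^3}, hold m fixed near d/3, run K-means 5 times and keep the best result) can be sketched as follows. This is only an illustrative skeleton: `run_feature_selection` and `kmeans_accuracy` are hypothetical stand-ins for the paper's SOGFS solver and its clustering evaluation, not the authors' implementation.

```python
import random

# Parameter grid used for all compared approaches: {10^-3, ..., 10^3}.
GAMMA_GRID = [10.0 ** e for e in range(-3, 4)]


def run_feature_selection(X, gamma, m):
    """Hypothetical stand-in for the SOGFS solver.

    Returns indices of selected features; a real implementation would
    rank features by the learned projection matrix.
    """
    rng = random.Random((int(gamma * 1000), m))
    d = len(X[0])
    return rng.sample(range(d), k=min(20, d))


def kmeans_accuracy(X, features):
    """Hypothetical stand-in for one K-means run scored by clustering accuracy."""
    return random.random()


def sweep_gamma(X, d):
    m = max(1, d // 3)  # m set empirically around d/3, then held fixed
    best_gamma, best_acc = None, -1.0
    for gamma in GAMMA_GRID:
        feats = run_feature_selection(X, gamma, m)
        # K-means is run 5 times and the best result is reported.
        acc = max(kmeans_accuracy(X, feats) for _ in range(5))
        if acc > best_acc:
            best_gamma, best_acc = gamma, acc
    return best_gamma, best_acc


X = [[0.0] * 30 for _ in range(10)]  # toy 10x30 data matrix
gamma_star, acc_star = sweep_gamma(X, d=30)
```

The design choice mirrored here is that only γ is searched; fixing m removes one grid dimension and keeps the comparison across methods on the same seven-point grid.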