Worst-Case Discriminative Feature Selection

Authors: Shuangli Liao, Quanxue Gao, Feiping Nie, Yang Liu, Xiangdong Zhang

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To evaluate the effectiveness of the proposed algorithm, we conduct classification experiments on many real data sets. In the experiment, we respectively use the original features and the score vectors of features over all class pairs to calculate the correlation coefficients, and analyze the experimental results in these two ways. Experimental results demonstrate the effectiveness of WDFS and UWDFS."
Researcher Affiliation | Academia | "1. State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, 710071, China; 2. School of Computer Science, OPTIMAL, Northwestern Polytechnical University, Xi'an, 710072, China"
Pseudocode | Yes | "For clarity, Algorithm 1 lists the pseudo code of solving the model WDFS. For clarity, Algorithm 2 lists the pseudo code of solving the model UWDFS."
Open Source Code | No | "The code of mRMR, TR and Fisher Score are downloaded from the ASU feature selection repository [3] and the code of DFS is implemented by ourselves in Python 2.7."
Open Datasets | Yes | "COIL20, USPS, two face image datasets ORL [1] and UMIST, and one biological gene expression microarray dataset, lung cancer (LUNG). (The COIL20, USPS and LUNG datasets are downloaded from the Internet [2].)" [Footnote 1: http://www.zjucadcg.cn/dengcai/Data/FaceData.html; Footnote 2: http://featureselection.asu.edu/datasets.php]
Dataset Splits | No | "We randomly divide each dataset into two approximately equal parts, one for training and the other for testing, and we repeat each group of experiments ten times."
Hardware Specification | No | No specific hardware details (such as GPU/CPU models or other computational resources) used for the experiments were mentioned in the paper.
Software Dependencies | No | "The code of mRMR, TR and Fisher Score are downloaded from the ASU feature selection repository [3] and the code of DFS is implemented by ourselves in Python 2.7."
Experiment Setup | Yes | "In particular, the trade-off parameter α in DFS and RFS is chosen from [1e-6, 1e-4, 1e-3, 1e-2, 0.1, 1, 10, 100, 1e3, 1e4]. We use the 1NN classifier for classification with Euclidean distance as the metric, and let the number of selected features vary between 10 and 100 with an interval of 10."
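The evaluation protocol quoted above (random roughly 50/50 train/test split, 1NN with Euclidean distance, ten repeats, 10 to 100 selected features in steps of 10) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the function names, the NumPy-only 1NN implementation, and the random-seed handling are all assumptions.

```python
import numpy as np

def one_nn_accuracy(Xtr, ytr, Xte, yte):
    # 1-nearest-neighbour classification under Euclidean distance,
    # as used in the paper's experiments.
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    pred = ytr[np.argmin(d, axis=1)]
    return float(np.mean(pred == yte))

def evaluate_ranking(X, y, ranking, feature_counts=range(10, 101, 10),
                     n_repeats=10, seed=0):
    """Mean 1NN test accuracy for each number of selected features.

    `ranking` is a list of feature indices ordered by the feature
    selector's score (hypothetical input format). Each repeat uses a
    fresh random ~50/50 train/test split.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    results = {}
    for k in feature_counts:
        cols = np.asarray(ranking[:k])  # top-k ranked features
        accs = []
        for _ in range(n_repeats):
            perm = rng.permutation(n)
            half = n // 2
            tr, te = perm[:half], perm[half:]
            accs.append(one_nn_accuracy(X[np.ix_(tr, cols)], y[tr],
                                        X[np.ix_(te, cols)], y[te]))
        results[k] = float(np.mean(accs))
    return results
```

Averaging over the ten random splits, as the quoted setup describes, reduces the variance that a single arbitrary split would introduce, which matters here because no fixed split is published.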