Joint Semi-Supervised Feature Selection and Classification through Bayesian Approach
Authors: Bingbing Jiang, Xingyu Wu, Kui Yu, Huanhuan Chen
AAAI 2019, pp. 3983–3990
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various datasets demonstrate the effectiveness and superiority of JSFS. In this section, we conduct a series of experiments to evaluate the effectiveness of JSFS. |
| Researcher Affiliation | Academia | Bingbing Jiang,1 Xingyu Wu,1 Kui Yu,2 Huanhuan Chen1 1School of Computer Science and Technology, University of Science and Technology of China, Hefei, China. 2School of Computer and Information, Hefei University of Technology, Hefei, China. |
| Pseudocode | Yes | Algorithm 1 The proposed JSFS algorithm |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology described is publicly available. |
| Open Datasets | Yes | We use G50C dataset (Chapelle and Zien 2005)... and 8 high-dimensional datasets that are collected from different fields are used, including two text datasets: PCMAC and Basehock; four image datasets: Gisette, Mnist2, Coil202 and Yale B2; and two biological datasets: Prostate and Colon. |
| Dataset Splits | Yes | G50C is randomly partitioned into 350 samples for training and 200 for testing, in which the training set includes 20 labeled samples for each class. We randomly sample 5, 5, 5, 5, 10, 20, 20, and 20 labeled samples per class for Coil202, Yale B2, Colon, Prostate, Mnist2, Gisette, Basehock and PCMAC, and the rest of the samples in the training set are unlabeled. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using LIBSVM as a baseline classifier but does not provide specific version numbers for any software dependencies required to replicate its own proposed methodology. |
| Experiment Setup | Yes | For a fair comparison, the regularization or trade-off parameters of all comparing algorithms are tuned from {10^-2, 10^-1, ..., 10^2} by grid search, the number of nearest neighbors k is set to five for all algorithms, and parameters µ, γ ∈ [0, 1] for JSFS. |
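
The split protocol and hyperparameter grid above can be sketched in code. This is an illustrative reconstruction, not the authors' implementation: the function name `semi_supervised_split` and the `-1` convention for unlabeled samples are assumptions, and the numbers below reproduce only the G50C protocol (350 train / 200 test, 20 labeled per class).

```python
import numpy as np

def semi_supervised_split(X, y, n_labeled_per_class, test_size, seed=0):
    """Hold out a random test set, then keep only n_labeled_per_class
    labels per class in the training set; remaining training labels
    are masked as unlabeled (-1). Illustrative sketch, not the paper's code."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    test_idx, train_idx = idx[:test_size], idx[test_size:]
    y_train = y[train_idx].copy()
    labeled = np.zeros(len(train_idx), dtype=bool)
    for c in np.unique(y_train):
        class_pos = np.flatnonzero(y_train == c)
        # retain labels for a random subset of each class
        labeled[rng.choice(class_pos, n_labeled_per_class, replace=False)] = True
    y_semi = np.where(labeled, y_train, -1)  # -1 marks unlabeled samples
    return X[train_idx], y_semi, X[test_idx], y[test_idx]

# Regularization grid {10^-2, 10^-1, ..., 10^2} tuned by grid search
param_grid = [10.0 ** p for p in range(-2, 3)]
```

For G50C this yields a training set of 350 samples, of which 40 (20 per class) keep their labels; baselines would then be tuned over the five grid values.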