Discriminative Feature Grouping
Authors: Lei Han, Yu Zhang
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on synthetic and real-world datasets demonstrate that the proposed methods have good performance compared with state-of-the-art feature grouping methods. In this section, we conduct an empirical evaluation of the proposed methods by comparing them with the Lasso, GFLasso, OSCAR, and the non-convex extensions of OSCAR, i.e., the ncFGS and ncTFGS methods in problems (3) and (4). |
| Researcher Affiliation | Academia | Lei Han¹ and Yu Zhang¹·² — ¹Department of Computer Science, Hong Kong Baptist University, Hong Kong; ²The Institute of Research and Continuing Education, Hong Kong Baptist University (Shenzhen) |
| Pseudocode | No | The paper describes the optimization procedure step by step within the text, including mathematical formulations for updating the variables. However, it does not present this as a formal pseudocode block or an Algorithm figure. |
| Open Source Code | No | The paper does not provide any specific links or explicit statements about the availability of open-source code for the described methodology. |
| Open Datasets | Yes | We conduct experiments on the previously studied breast cancer data, which contains 8141 genes in 295 tumors (78 metastatic and 217 non-metastatic). Following (Yang et al. 2012), we use the data from some pairs of classes in the 20-newsgroups dataset to form binary classification problems. |
| Dataset Splits | Yes | 50%, 30%, and 20% of data are randomly chosen for training, validation and testing, respectively. (Breast Cancer dataset); Then 20%, 40% and 40% of samples are randomly selected for training, validation, and testing, respectively. (20-Newsgroups dataset) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions implementing methods but does not provide specific software names with version numbers for reproducibility (e.g., 'Python 3.8, PyTorch 1.9'). |
| Experiment Setup | Yes | Hyperparameters, including the regularization parameters in all the models, τ in ncTFGS, and γ in ADFG, are tuned using an independent validation set with n samples. We use a grid search method with the resolutions for the λ_i (i = 1, 2, 3) in all methods as [10^-4, 10^-3, ..., 10^2] and those for γ as [0, 0.1, ..., 1]. Moreover, the resolution for τ in the ncTFGS method is [0.05, 0.1, ..., 5], which is in line with the setting of the original work (Yang et al. 2012). |
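The grid search described in the experiment setup can be sketched as below. The grids for the λ_i and γ follow the resolutions quoted from the paper; the `validation_error` function is a placeholder assumption (the real setup would train a model on the training split and score it on the validation split), not the authors' implementation.

```python
import itertools

# Grids as reported in the paper: lambda_i in {10^-4, ..., 10^2}, gamma in {0, 0.1, ..., 1}.
lambda_grid = [10.0 ** e for e in range(-4, 3)]
gamma_grid = [round(0.1 * i, 1) for i in range(11)]

def validation_error(lam1, lam2, lam3, gamma):
    # Placeholder (assumption): stands in for training the model with these
    # hyperparameters and measuring its error on the validation set.
    return ((lam1 - 0.1) ** 2 + (lam2 - 1.0) ** 2
            + (lam3 - 0.01) ** 2 + (gamma - 0.5) ** 2)

def grid_search():
    # Exhaustively evaluate every combination on the grid and keep the
    # setting with the lowest validation error.
    best, best_err = None, float("inf")
    for lam1, lam2, lam3, gamma in itertools.product(
            lambda_grid, lambda_grid, lambda_grid, gamma_grid):
        err = validation_error(lam1, lam2, lam3, gamma)
        if err < best_err:
            best, best_err = (lam1, lam2, lam3, gamma), err
    return best

best_params = grid_search()
```

With these resolutions the search space is 7 × 7 × 7 × 11 = 3773 combinations, small enough for exhaustive evaluation; the ncTFGS method would add a fourth grid over τ in [0.05, 0.1, ..., 5].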