Coupled Dictionary Learning for Unsupervised Feature Selection
Authors: Pengfei Zhu, Qinghua Hu, Changqing Zhang, Wangmeng Zuo
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In this section, experiments are conducted to verify the effectiveness of the proposed algorithm on six benchmark datasets. The classification and clustering performance are evaluated for CDL-FS and all comparison methods." |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Tianjin University, Tianjin, China; (2) School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China. Email: zhupengfei@tju.edu.cn |
| Pseudocode | Yes | Algorithm 1: Algorithm of coupled dictionary learning (CDL-FS) for unsupervised feature selection |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Six diverse publicly available datasets are selected for comparison, including one face recognition dataset (warp AR10P), one handwritten digit recognition dataset (USPS), one object recognition dataset (COIL20), one spoken letter dataset (ISOLET), and two microarray datasets (SMK-CAN-187 and Prostate-GE). The statistics of the six datasets are shown in Table 1; footnotes 1-6 in the paper provide the dataset URLs. |
| Dataset Splits | No | The paper describes dataset usage and evaluation but does not specify explicit training/validation/test dataset splits or methodologies for data partitioning. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or specific computing environments) used to run its experiments. |
| Software Dependencies | No | The paper describes the use of algorithms like K-means and K nearest neighbor classifier, but does not provide specific version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | For the proposed method, there are two parameters in Eq. (7), i.e., μ and τ. In the experiment, μ is fixed to 1 and τ is tuned by the grid-search strategy over {10^-9, 10^-6, 10^-3, 10^-1, 10^0, 10^1, 10^3, 10^6, 10^9}. Additionally, the number of atoms in the synthesis dictionary is fixed as half the number of samples. |
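The parameter-tuning procedure reported above (μ fixed to 1, τ swept over a fixed grid) can be sketched as a simple grid search. This is a minimal illustration, not the authors' code: `evaluate_cdl_fs` is a hypothetical placeholder for running CDL-FS with a given τ and scoring the selected features (e.g., by clustering accuracy), here replaced by a dummy function so the sketch runs.

```python
# Sketch of the grid-search strategy described in the paper:
# mu is fixed to 1; tau is swept over the stated grid and the
# best-scoring value is kept. evaluate_cdl_fs is a stand-in
# (assumption) for training CDL-FS and scoring the result.
MU = 1.0
TAU_GRID = [1e-9, 1e-6, 1e-3, 1e-1, 1e0, 1e1, 1e3, 1e6, 1e9]

def evaluate_cdl_fs(mu, tau):
    """Placeholder score. In the real experiment this would run
    CDL-FS with (mu, tau) and return, e.g., clustering accuracy.
    Dummy scoring: peaks at tau = 1 for illustration only."""
    return -abs(tau - 1.0)

def grid_search(grid, evaluate):
    """Return the tau value from `grid` with the highest score."""
    best_tau, best_score = None, float("-inf")
    for tau in grid:
        score = evaluate(MU, tau)
        if score > best_score:
            best_tau, best_score = tau, score
    return best_tau, best_score

best_tau, best_score = grid_search(TAU_GRID, evaluate_cdl_fs)
```

In practice the score function would wrap the full pipeline (feature selection followed by K-means clustering or a K-nearest-neighbor classifier, as the paper evaluates), so each grid point is one complete experiment run.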