Dictionary Learning with Mutually Reinforcing Group-Graph Structures
Authors: Hongteng Xu, Licheng Yu, Dixin Luo, Hongyuan Zha, Yi Xu
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on several datasets and obtain superior performance compared with the state-of-the-art methods, especially in the case of only a few labeled samples and limited dictionary size. ... The image classification experiments on a variety of datasets show the superior performances of the proposed method compared to the state-of-art methods, especially in the case of a few labeled samples and limited dictionary size. |
| Researcher Affiliation | Academia | 1 School of ECE, Georgia Institute of Technology, Atlanta, GA, USA; 2 Department of Computer Science, University of North Carolina at Chapel Hill, NC, USA; 3 SEIEE, Shanghai Jiao Tong University, Shanghai, China; 4 Software Engineering Institute, East China Normal University, Shanghai, China; 5 College of Computing, Georgia Institute of Technology, Atlanta, GA, USA |
| Pseudocode | No | The paper describes the algorithm steps in textual paragraphs and mathematical equations, but it does not provide a clearly labeled "Algorithm" block or "Pseudocode" section. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We evaluate our method on four datasets: 1) Extended Yale B (Georghiades, Kriegman, and Belhumeur 1998) ... 2) UIUC-sports (Li and Fei-Fei 2007) ... 3) Scene15 (Lazebnik, Schmid, and Ponce 2006) ... 4) Caltech101 (Fei-Fei, Fergus, and Perona 2007). |
| Dataset Splits | No | The paper describes using 'labeled samples' for training, 'unlabeled samples' for semi-supervised learning, and 'testing samples' for evaluation. Although it applies an 'entropy-based label propagation' step that incorporates the top α% of unlabeled samples into their predicted groups (a hedged sketch of this step follows the table), it does not specify a separate validation split for hyperparameter tuning or early stopping in the conventional sense. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions various methods and models but does not provide specific software names with version numbers for reproducibility (e.g., Python, PyTorch, TensorFlow, or specific solvers with versions). |
| Experiment Setup | Yes | The dictionary size is set to be K = 380 for all methods. ... The dictionary size is set to be K = 160 for all methods. ... The dictionary size is set to be K = 450 for all methods. ... we set dictionary size K to be 500, 800, 1000 and 1500 respectively for all methods. ... the number of neighbors for each sample is set to be p = 2 in the graph construction; the percentage α for label propagation is set to be 10; the sparsity C are set according to the datasets C = 20 for Extended Yale B, Scene15 and Caltech101, and C = 25 for UIUC-Sports; the graph weight µ is set to be 0.2 for Extended Yale B and Caltech101, and 0.5 for UIUC-Sports and Scene15. |
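
For convenience, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch. This is our own illustration, not code from the paper: the variable names and structure are invented here, and the pairing of each dictionary size K with its dataset follows the order in which the datasets and sizes are quoted.

```python
# Hedged sketch of the reported experiment configuration.
# Values are as quoted in the Experiment Setup row; names are ours.
EXPERIMENT_CONFIG = {
    "Extended Yale B": {"dictionary_size_K": 380, "sparsity_C": 20, "graph_weight_mu": 0.2},
    "UIUC-Sports":     {"dictionary_size_K": 160, "sparsity_C": 25, "graph_weight_mu": 0.5},
    "Scene15":         {"dictionary_size_K": 450, "sparsity_C": 20, "graph_weight_mu": 0.5},
    # Caltech101 is evaluated at several dictionary sizes.
    "Caltech101":      {"dictionary_size_K": [500, 800, 1000, 1500], "sparsity_C": 20, "graph_weight_mu": 0.2},
}

# Settings shared across all datasets, as quoted from the paper.
SHARED_CONFIG = {
    "graph_neighbors_p": 2,         # neighbors per sample in graph construction
    "label_propagation_alpha": 10,  # percentage of unlabeled samples propagated
}

if __name__ == "__main__":
    for dataset, cfg in EXPERIMENT_CONFIG.items():
        print(dataset, {**SHARED_CONFIG, **cfg})
```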
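
The Dataset Splits row mentions an 'entropy-based label propagation' step that moves the top α% of unlabeled samples into their predicted groups. The paper does not give code for this, so the following is a minimal sketch under our own assumptions: we interpret "top α%" as the samples whose predicted group distribution has the lowest entropy (i.e. the most confident predictions), and the function name and toy data are hypothetical.

```python
import numpy as np

def propagate_most_confident(probs: np.ndarray, alpha_percent: float = 10.0):
    """Hedged sketch (not the authors' code): select the alpha% of unlabeled
    samples with the lowest-entropy predicted group distribution and assign
    them their argmax group.

    probs: (n_unlabeled, n_groups) array of predicted group probabilities.
    Returns (selected_indices, assigned_groups).
    """
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)   # per-sample entropy
    n_select = max(1, int(len(probs) * alpha_percent / 100.0))
    selected = np.argsort(entropy)[:n_select]               # most confident first
    return selected, probs[selected].argmax(axis=1)

# Toy usage with random predictions (illustration only).
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(5), size=100)
idx, groups = propagate_most_confident(p, alpha_percent=10)
print(idx.shape, groups[:5])
```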