Discriminative Semi-Supervised Dictionary Learning with Entropy Regularization for Pattern Classification
Authors: Meng Yang, Lin Chen
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on face recognition, digit recognition and texture classification show the effectiveness of the proposed method. |
| Researcher Affiliation | Academia | Meng Yang and Lin Chen — 1. College of Computer Science & Software Engineering, Shenzhen University, Shenzhen, China; 2. School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China |
| Pseudocode | No | The paper describes methods using equations and textual steps but does not include a formally structured pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link confirming the availability of its own open-source code for the described methodology. |
| Open Datasets | Yes | We evaluate our approach on two face databases: Extended Yale B database (Lee, Jeffrey and David 2005), and LFW face database (Wolf, Hassner and Taigman 2009), two handwritten digit datasets: MNIST (LeCun et al. 1998) and USPS (Hull 1994), and an object category database: Texture (Lazebnik, Schmid and Ponce 2005), for the tasks of face recognition, digit recognition and texture classification, respectively. |
| Dataset Splits | No | The paper specifies training and testing splits, and how training data is further divided into labeled and unlabeled subsets, but does not explicitly mention a separate validation set for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or library versions (e.g., 'Python 3.8, PyTorch 1.9') needed to replicate the experiment. |
| Experiment Setup | Yes | In all our experiments, we set γ=0.001 and λ=0.01 based on our experimental experience. ... we set β=0.01 to suitably lower the weight of unlabeled samples classified wrongly, while better utilizing the discrimination of the learnt dictionary. |
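Since the paper releases no code (see the table above), a reader wanting a starting point could reach for a generic sparse dictionary learner. The sketch below uses scikit-learn's `DictionaryLearning` — this is *not* the authors' semi-supervised, entropy-regularized method, and the component count and data here are hypothetical; only the sparsity weight mirrors the paper's reported λ=0.01.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Illustrative baseline only: plain unsupervised dictionary learning,
# not the paper's discriminative semi-supervised formulation.
rng = np.random.RandomState(0)
X = rng.randn(100, 20)  # hypothetical data: 100 samples, 20-dim features

learner = DictionaryLearning(
    n_components=15,  # dictionary size (arbitrary illustrative choice)
    alpha=0.01,       # sparsity weight, analogous to the paper's lambda = 0.01
    max_iter=50,
    random_state=0,
)
codes = learner.fit_transform(X)  # sparse codes for each sample
D = learner.components_           # learned dictionary atoms

print(codes.shape, D.shape)  # (100, 15) (15, 20)
```

Reproducing the paper's actual results would additionally require the labeled/unlabeled split weighting (β=0.01) and the entropy regularization term (γ=0.001), which this generic objective does not include.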