Orthogonality-Promoting Dictionary Learning via Bayesian Inference
Authors: Lei Luo, Jie Xu, Cheng Deng, Heng Huang (pp. 4472-4479)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical results show that our method can learn the dictionary with an accuracy better than existing methods, especially when the number of training signals is limited. … We evaluated the performance of the proposed approach on two face data sets: the AR (Martinez 1998) and the Extended Yale B (Lee and J. Ho 2005) for face recognition, a data set for action recognition: UCF sports action (Rodriguez, Ahmed, and Shah 2008) and a data set for object categories: Caltech-101 (Lazebnik, Schmid, and Ponce 2006). |
| Researcher Affiliation | Collaboration | Lei Luo,1 Jie Xu,1,2 Cheng Deng,2 Heng Huang1,3 1Electrical and Computer Engineering, University of Pittsburgh, USA 2School of Electronic Engineering, Xidian University, Xian, Shanxi, China, 3JDDGlobal.com |
| Pseudocode | No | The EM algorithm starts from an initial guess and iteratively runs an expectation (E) step, which evaluates the posterior probabilities using currently estimated parameters, and a maximization (M) step, which re-estimates the parameters based on the probabilities calculated in the E step. |
| Open Source Code | No | The paper does not provide any information about open-source code availability, such as repository links or explicit statements of code release. |
| Open Datasets | Yes | We evaluated the performance of the proposed approach on two face data sets: the AR (Martinez 1998) and the Extended Yale B (Lee and J. Ho 2005) for face recognition, a data set for action recognition: UCF sports action (Rodriguez, Ahmed, and Shah 2008) and a data set for object categories: Caltech-101 (Lazebnik, Schmid, and Ponce 2006). |
| Dataset Splits | Yes | In the first experiment, we test the performance of our method under different number of training samples. As we know, AR face database contains 14 face images without real disguise for each person. We randomly choose 2, 4, 6, 8 or 10 face images from them as training samples. Then, three face images with sunglasses and three face images with scarf from session 1 are considered as test samples, respectively. … Following a common evaluation protocol (Jiang, Lin, and Davis 2013), we evaluate all methods via five-fold cross validation on the UCF sports action database. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or cloud computing specifications used for running experiments. |
| Software Dependencies | No | According to the suggestion of (Luo et al. 2018), we set a, b, c = 10^-4. |
| Experiment Setup | Yes | According to the suggestion of (Luo et al. 2018), we set a, b, c = 10^-4. and In the first experiment, we test the performance of our method under different number of training samples. As we know, AR face database contains 14 face images without real disguise for each person. We randomly choose 2, 4, 6, 8 or 10 face images from them as training samples. |
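The Pseudocode row quotes the paper's prose description of EM (an E step that evaluates posteriors under the current parameters, then an M step that re-estimates the parameters) without any actual pseudocode. As a generic illustration of that E/M loop on a toy two-component Gaussian mixture (this is not the paper's Bayesian dictionary-learning model, just the standard EM pattern the quote describes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data from two Gaussians (true means 0 and 5, unit variance).
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])

# Initial guess for the mixture parameters.
mu = np.array([1.0, 4.0])      # component means
sigma = np.array([1.0, 1.0])   # component standard deviations
pi = np.array([0.5, 0.5])      # mixing weights

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E step: posterior responsibility of each component for each point,
    # evaluated using the currently estimated parameters.
    dens = pi * normal_pdf(x[:, None], mu, sigma)        # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M step: re-estimate the parameters from the E-step responsibilities.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print(np.round(np.sort(mu), 1))
```

The estimated means converge close to the true component means; the same alternation (posterior evaluation, then parameter re-estimation) underlies the paper's inference, with the mixture replaced by its hierarchical dictionary prior.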
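The Dataset Splits row cites five-fold cross validation on UCF sports action. A minimal sketch of that split mechanics (hypothetical sample count; not the authors' evaluation code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                             # hypothetical number of samples
indices = rng.permutation(n)        # shuffle sample indices once
folds = np.array_split(indices, 5)  # five roughly equal folds

fold_sizes = []
for k in range(5):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
    # ... train on train_idx, evaluate on test_idx, record the score ...
    assert set(test_idx).isdisjoint(train_idx)  # folds never leak
    fold_sizes.append(len(test_idx))

# Every sample is held out exactly once across the five folds.
print(sum(fold_sizes))
```

The reported number would then be the mean of the five per-fold scores, matching the protocol of Jiang, Lin, and Davis (2013) that the paper follows.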