Bayesian Maximum Margin Principal Component Analysis
Authors: Changying Du, Shandian Zhe, Fuzhen Zhuang, Yuan Qi, Qing He, Zhongzhi Shi
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on various classification tasks show that our method outperforms a number of competitors. |
| Researcher Affiliation | Academia | 1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China 2University of Chinese Academy of Sciences, Beijing 100049, China 3Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA |
| Pseudocode | No | The paper describes the variational inference algorithm and mathematical derivations but does not include a formal pseudocode or algorithm block. |
| Open Source Code | No | The paper mentions using a third-party package (LIBLINEAR) but provides no link to, or statement about, open-sourcing the code for its own method. |
| Open Datasets | Yes | Some statistics of these data sets are shown in Table 1. For the TRECVID2003 data... The Yale data contains... The ORL data contains... The Yale B (the extended Yale Face Database B) data includes... For the 11 Tumors and 14 Tumors gene expression data sets... Finally, the 20 Newsgroups data contains... |
| Dataset Splits | Yes | We decide to select C from the integer set {10, 20, 30, 40} for each data set by performing L-fold cross-validation on training data, where L is the smaller one of 5 and the number of training samples per class. ... A random and equal split into training/testing is conducted for BM2PCA. The number of training samples per class is 2 for ORL and 14 Tumors, and 5 for 20 Newsgroups. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of the "LIBLINEAR package (Fan et al. 2008)" but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | In all of our experiments, the hyper-parameters of BM2PCA are set as: ar = br = 1e-3, aτ = 1e-2, aν = 1e-1, bτ = bν = δ = 1e-5. For the regularization parameter C, we empirically found that BM2PCA works well on most of our data sets when 10 ≤ C ≤ 40. We decide to select C from the integer set {10, 20, 30, 40} for each data set by performing L-fold cross-validation on training data, where L is the smaller one of 5 and the number of training samples per class. |
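The model-selection procedure quoted above (choose C from {10, 20, 30, 40} by L-fold cross-validation, with L the smaller of 5 and the number of training samples per class) can be sketched as follows. This is a minimal illustration, not the authors' code: BM2PCA itself is not publicly released, so the `train_and_score` callback is a hypothetical stand-in for fitting the model with a given C and returning validation accuracy.

```python
import numpy as np

def select_C_by_cv(X, y, train_and_score, C_grid=(10, 20, 30, 40), seed=0):
    """Pick C from C_grid by L-fold cross-validation on the training data.

    L = min(5, smallest number of training samples per class), following the
    paper's stated setup. `train_and_score(Xtr, ytr, Xval, yval, C)` is a
    hypothetical callback standing in for BM2PCA training + evaluation.
    """
    _, counts = np.unique(y, return_counts=True)
    L = int(min(5, counts.min()))  # L-fold: smaller of 5 and samples/class
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, L)

    best_C, best_acc = None, -1.0
    for C in C_grid:
        fold_accs = []
        for k in range(L):
            val = folds[k]
            trn = np.concatenate([folds[j] for j in range(L) if j != k])
            fold_accs.append(train_and_score(X[trn], y[trn], X[val], y[val], C))
        mean_acc = float(np.mean(fold_accs))
        if mean_acc > best_acc:
            best_C, best_acc = C, mean_acc
    return best_C, best_acc
```

With only 2 training samples per class (the ORL and 14 Tumors settings quoted above), L collapses to 2-fold cross-validation, which is why the paper ties L to the per-class sample count rather than fixing L = 5.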