Online Bayesian Max-Margin Subspace Multi-View Learning

Authors: Jia He, Changying Du, Fuzhen Zhuang, Xin Yin, Qing He, Guoping Long

IJCAI 2016

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on various classification tasks show that our model has superior performance.
Researcher Affiliation Academia 1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China 2Laboratory of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China 3University of Chinese Academy of Sciences, Beijing 100049, China
Pseudocode No The paper describes the model mathematically but does not include any explicit pseudocode or algorithm blocks.
Open Source Code No The paper does not provide any statement or link indicating that the source code for the methodology is openly available.
Open Datasets Yes There are four data sets, i.e., Trecvid, Washington, Cornell and News4Gv, used in our experiments. We select the web pages from Cornell and Washington as our experimental data1. These two data sets have five classes with two views. 20Newsgroups data set is widely used for classification. (Footnote 1: http://www-2.cs.cmu.edu/~webkb/)
Dataset Splits Yes the regularization parameter C is chosen from the integer set {1, 2, 3} and the subspace dimension m from the integer set {20, 30, 50} for each data set by performing 5-fold cross validation on training data. In batch learning experiments, we use the same training/testing split of the Trecvid data set as in [Chen et al., 2012]. The ratio sampled for training data is 0.5 in the three data sets Trecvid, Washington and Cornell, and 0.05 in News4Gv.
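The reported splits (50% of samples for training on Trecvid, Washington and Cornell; 5% on News4Gv) can be reproduced with a stratified random split. This is only a sketch: `make_classification` is a synthetic stand-in for the actual data sets, which the paper does not distribute, and the sample count is hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for one of the paper's data sets (hypothetical size:
# 1000 samples, 5 classes, matching the five-class WebKB setting).
X, y = make_classification(n_samples=1000, n_features=50, n_classes=5,
                           n_informative=10, random_state=0)

# Ratio sampled for training: 0.5 (Trecvid/Washington/Cornell), 0.05 (News4Gv).
sizes = {}
for train_ratio in (0.5, 0.05):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_ratio, stratify=y, random_state=0)
    sizes[train_ratio] = (len(X_tr), len(X_te))
print(sizes)
```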
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies No The paper does not provide specific software dependencies with version numbers.
Experiment Setup Yes the regularization parameter C is chosen from the integer set {1, 2, 3} and the subspace dimension m from the integer set {20, 30, 50} for each data set by performing 5-fold cross validation on training data. In our online learning, the regularization parameter C is instead chosen from the integer set {1, 5, 15} and the subspace dimension m from the integer set {20, 30, 50}. The remaining parameters are set identically in both batch and online learning, i.e., a = b = 1e-3, aφ = 1e-2, a = 1e-1, bφ = b = β = 1e-5.
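The model-selection procedure described above (choosing C and the subspace dimension m by 5-fold cross validation over small integer grids) can be sketched as a plain grid search. The paper's Bayesian max-margin model is not public, so a `PCA` + `LogisticRegression` pipeline stands in as a hypothetical subspace classifier; only the grids {1, 2, 3} for C and {20, 30, 50} for m come from the paper.

```python
import itertools
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic training data (hypothetical; the real features are multi-view).
X, y = make_classification(n_samples=300, n_features=100, n_informative=30,
                           random_state=0)

# Batch-setting grids from the paper: C in {1, 2, 3}, m in {20, 30, 50}.
best_score, best_params = -np.inf, None
for C, m in itertools.product([1, 2, 3], [20, 30, 50]):
    model = make_pipeline(PCA(n_components=m),           # stand-in subspace of dim m
                          LogisticRegression(C=C, max_iter=1000))
    score = cross_val_score(model, X, y, cv=5).mean()    # 5-fold CV on training data
    if score > best_score:
        best_score, best_params = score, (C, m)
print(best_params)
```

For the online setting the same loop applies with the grid for C replaced by {1, 5, 15}, per the quoted setup.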