Online Bayesian Passive-Aggressive Learning

Authors: Tianlin Shi, Jun Zhu

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that our approaches significantly improve time efficiency while maintaining comparable results with the batch counterparts." and "We demonstrate the efficiency and prediction accuracy of online MedLDA and MedHDP, denoted as paMedLDA and paMedHDP, on the 20Newsgroups (20NG) and a large Wikipedia dataset."
Researcher Affiliation | Academia | Institute for Interdisciplinary Information Sciences, Tsinghua University, China; Dept. of Comp. Sci. & Tech., TNList Lab, State Key Lab of Intell. Tech. & Sys., Tsinghua University, China
Pseudocode | Yes | Algorithm 1: Online MedLDA
Open Source Code | No | The paper mentions a third-party tool (LIBSVM) but does not provide concrete access to the authors' own source code for the described methodology.
Open Datasets | Yes | "We demonstrate the efficiency and prediction accuracy of online MedLDA and MedHDP, denoted as paMedLDA and paMedHDP, on the 20Newsgroups (20NG) and a large Wikipedia dataset." Footnote 2 ("See http://lshtc.iit.demokritos.gr/.") gives the source of the Wikipedia dataset.
Dataset Splits | No | The paper explicitly states the training and test sets and their sizes but does not describe a distinct validation set or its split.
Hardware Specification | Yes | "All of the experiments are done on a normal computer with single-core clock rate up to 2.4 GHz."
Software Dependencies | No | The paper mentions using LIBSVM but does not give a version number for it or for any other software dependency.
Experiment Setup | Yes | "For all MED topic models, we use ϵ = 164, c = 1, v = 1, the choice of which is not crucial to the models' performance as shown in (Zhu et al., 2013a)." and "for all LDA-based topic models, we use symmetric Dirichlet priors α = 1/K · 1, γ = 0.5 · 1; for all HDP-based topic models, we use α = 5, γ = 1, η = 0.45 · 1."
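For readers unfamiliar with the passive-aggressive framework the paper builds on, the sketch below shows the classical PA-I update (Crammer et al., 2006), which the paper's Bayesian online formulation generalizes. This is illustrative background only, not the paper's algorithm; the function name and parameter names are our own.

```python
def pa_update(w, x, y, C=1.0):
    """One PA-I step for a linear classifier.

    The weights stay unchanged ("passive") when the example is already
    classified with margin >= 1; otherwise the step size tau is the
    smallest correction that closes the hinge loss, capped at C
    ("aggressive" but bounded).
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)                 # hinge loss
    sq_norm = sum(xi * xi for xi in x)
    tau = min(C, loss / sq_norm) if sq_norm else 0.0
    return [wi + tau * y * xi for wi, xi in zip(w, x)]

# Starting from zero weights, one misclassified example triggers an update;
# re-presenting the same example then leaves the weights unchanged.
w = pa_update([0.0, 0.0, 0.0], [1.0, 0.0, 1.0], 1)   # -> [0.5, 0.0, 0.5]
w = pa_update(w, [1.0, 0.0, 1.0], 1)                 # margin is now 1, no change
```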