Diversity-Promoting Bayesian Learning of Latent Variable Models

Authors: Pengtao Xie, Jun Zhu, Eric Xing

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We propose two approaches that have complementary advantages. One is to define diversity-promoting mutual angular priors... We develop two efficient approximate posterior inference algorithms... The other approach is to impose diversity-promoting regularization directly over the post-data distribution of components. These two methods are applied to the Bayesian mixture of experts model to encourage the experts to be diverse and experimental results demonstrate the effectiveness and efficiency of our methods."
Researcher Affiliation | Academia | Pengtao Xie (PENGTAOX@CS.CMU.EDU), Jun Zhu (DCSZJ@TSINGHUA.EDU.CN), Eric P. Xing (EPXING@CS.CMU.EDU); Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213 USA; Dept. of Comp. Sci. & Tech., State Key Lab of Intell. Tech. & Sys., TNList, CBICR Center, Tsinghua University, China
Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information (e.g., repository links or an explicit statement of code release) for its source code.
Open Datasets | Yes | "We used two binary-classification datasets. The first one is the Adult-9 (Platt et al., 1999) dataset... The other dataset is SUNBuilding compiled from the SUN (Xiao et al., 2010) dataset"
Dataset Splits | Yes | "Adult-9... 33K training instances and 16K testing instances. The other dataset is SUNBuilding... 70% of images are used for training and the rest for testing. All parameters were tuned using 5-fold cross validation."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | Yes | "All parameters were tuned using 5-fold cross validation."
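The mutual angular priors noted in the Research Type row reward component vectors whose pairwise angles are large. As a rough illustration only, the sketch below computes one common form of a mutual angular diversity score (mean of pairwise angles minus their variance); the function name, the variance weight, and this exact scoring formula are assumptions for illustration, not the prior actually defined in the paper.

```python
import numpy as np

def mutual_angular_diversity(components, var_weight=1.0):
    """Score a set of component vectors by the mean of their pairwise
    angles minus a weighted variance of those angles; larger values
    mean the components point in more mutually distinct directions."""
    K = len(components)
    angles = []
    for i in range(K):
        for j in range(i + 1, K):
            a, b = components[i], components[j]
            cos = abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
            angles.append(np.arccos(np.clip(cos, 0.0, 1.0)))
    angles = np.array(angles)
    return angles.mean() - var_weight * angles.var()

# Near-orthogonal components score higher than near-parallel ones.
orthogonal = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
parallel = [np.array([1.0, 0.0]), np.array([1.0, 0.1])]
print(mutual_angular_diversity(orthogonal) > mutual_angular_diversity(parallel))  # True
```

A prior or regularizer built from such a score pushes the experts in a mixture-of-experts model toward complementary directions, which is the effect the paper's two approaches are designed to achieve.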
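The protocol recorded under Dataset Splits and Experiment Setup (a 70/30 train/test split on SUNBuilding, with hyperparameters tuned by 5-fold cross validation) can be sketched as follows. The function names and the use of NumPy are illustrative assumptions, not the authors' code; only the split fractions and fold count come from the report above.

```python
import numpy as np

def train_test_split(n, train_frac=0.7, seed=0):
    """Randomly assign 70% of the n instances to training and the
    rest to testing, as reported for the SUNBuilding dataset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    cut = int(train_frac * n)
    return idx[:cut], idx[cut:]

def five_fold(train_idx):
    """Partition the training indices into 5 folds; during tuning,
    each fold serves once as the held-out validation set."""
    return np.array_split(train_idx, 5)

train, test = train_test_split(100)
folds = five_fold(train)
print(len(train), len(test), [len(f) for f in folds])  # 70 30 [14, 14, 14, 14, 14]
```

The test split stays untouched during tuning; only the five folds of the training portion are rotated through as validation data before the final model is evaluated once on the test set.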