Coupled Variational Bayes via Optimization Embedding

Authors: Bo Dai, Hanjun Dai, Niao He, Weiyang Liu, Zhen Liu, Jianshu Chen, Lin Xiao, Le Song

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we justify the benefits of the proposed coupled variational Bayes in terms of its flexibility and its sample efficiency empirically. We also illustrate its generative ability. The algorithms are executed on a machine with an Intel Core i7-4790K CPU and GTX 1080Ti GPUs. Additional experimental results, including variants of CVB for discrete latent variable models and more results on real-world datasets, can be found in Appendix D.
Researcher Affiliation | Collaboration | Bo Dai (1,2), Hanjun Dai (1), Niao He (3), Weiyang Liu (1), Zhen Liu (1), Jianshu Chen (4), Lin Xiao (5), Le Song (1,6); (1) Georgia Institute of Technology, (2) Google Brain, (3) University of Illinois at Urbana-Champaign, (4) Tencent AI, (5) Microsoft Research, (6) Ant Financial
Pseudocode | Yes | Algorithm 1 Coupled Variational Bayes (CVB). A hedged code sketch of the optimization-embedding update described by this algorithm is given after the table.
Open Source Code | Yes | The implementation is released at https://github.com/Hanjun-Dai/cvb.
Open Datasets | Yes | We first justify the flexibility of the optimization embedding in CVB on the simple synthetic dataset [Mescheder et al., 2017]. To verify the sample efficiency of CVB, we compare the performance of CVB on the statically binarized MNIST dataset... We conduct experiments on real-world datasets, MNIST and CelebA, to demonstrate the generative ability of the model learned by CVB.
Dataset Splits | No | The paper mentions using a 'held-out test set' and 'test samples' but does not provide specific train/validation/test dataset splits (e.g., percentages, sample counts, or citations to predefined splits) needed for full reproduction.
Hardware Specification | Yes | The algorithms are executed on a machine with an Intel Core i7-4790K CPU and GTX 1080Ti GPUs.
Software Dependencies | No | The paper mentions that the implementation can be done in TensorFlow or PyTorch and uses existing code for baselines, but does not specify exact software dependencies with version numbers for its own experiments.
Experiment Setup | Yes | The number of steps T in the optimization embedding is set to 5 in this case. In each epoch, the batch size is set to 100 while the initial learning rate is set to 0.0001. These settings are gathered into the configuration sketch below.
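
For readers who want a concrete picture of Algorithm 1, below is a minimal PyTorch-style sketch of the optimization-embedding idea: the approximate posterior sample is produced by unrolling T differentiable gradient steps on the (negative) ELBO, so the variational distribution stays coupled to the decoder parameters. Everything here is an assumption made for illustration: the names SimpleDecoder, neg_elbo, and optimization_embedding, the Bernoulli decoder, the omitted entropy term, and the inner step size eta are ours, not the authors' released implementation (see the repository linked above).

```python
# Hedged sketch of the optimization-embedding idea behind Algorithm 1 (CVB).
# All names and modeling choices here are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleDecoder(nn.Module):
    """Toy decoder p_theta(x | z) with a Bernoulli likelihood (illustrative only)."""
    def __init__(self, z_dim=8, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def log_prob(self, x, z):
        logits = self.net(z)
        return -nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(dim=-1)

def neg_elbo(x, z, decoder):
    # Standard-normal prior on z; the entropy term of q is omitted in this sketch.
    log_px_z = decoder.log_prob(x, z)
    log_pz = -0.5 * (z ** 2).sum(dim=-1)
    return -(log_px_z + log_pz).mean()

def optimization_embedding(x, decoder, T=5, eta=1e-2, z_dim=8):
    """Unroll T gradient steps on the negative ELBO w.r.t. z, starting from a
    simple initial sample; the unrolled chain remains differentiable in the
    decoder parameters, which is the coupling exploited by CVB."""
    z = torch.randn(x.shape[0], z_dim, requires_grad=True)
    for _ in range(T):
        loss = neg_elbo(x, z, decoder)
        grad_z, = torch.autograd.grad(loss, z, create_graph=True)
        z = z - eta * grad_z  # differentiable update: gradients w.r.t. theta flow through it
    return z
```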
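
The hyperparameters quoted in the Experiment Setup row (T = 5, batch size 100, initial learning rate 0.0001) could be wired into a training step as sketched below, reusing the definitions above. The choice of Adam and the shape of the loop are assumptions rather than details reported in the paper.

```python
# Hypothetical training-step skeleton consolidating the reported settings;
# the optimizer choice (Adam) and the loop structure are assumptions.
config = {"unroll_steps_T": 5, "batch_size": 100, "initial_learning_rate": 1e-4}

decoder = SimpleDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=config["initial_learning_rate"])

def train_step(x_batch):
    # x_batch: a mini-batch of size config["batch_size"], e.g. binarized MNIST images.
    z = optimization_embedding(x_batch, decoder, T=config["unroll_steps_T"])
    loss = neg_elbo(x_batch, z, decoder)  # gradients also flow through the unrolled z
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```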