Bridging Explicit and Implicit Deep Generative Models via Neural Stein Estimators

Authors: Qitian Wu, Rui Gao, Hongyuan Zha

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct comprehensive experiments to justify our theoretical findings and demonstrate that joint training can help two models achieve better performance."
Researcher Affiliation | Academia | Qitian Wu¹, Rui Gao², Hongyuan Zha¹,³ — ¹Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University; ²University of Texas at Austin; ³School of Data Science, Shenzhen Institute of Artificial Intelligence and Robotics for Society, The Chinese University of Hong Kong, Shenzhen
Pseudocode | No | The provided text does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The implementation codes are available at https://github.com/qitianwu/Stein-Bridging."
Open Datasets | Yes | "Furthermore, we apply the method to MNIST and CIFAR datasets which require the model to deal with high-dimensional image data. In MNIST and CIFAR, we directly use pictures in the training sets as true samples." (A hedged loading sketch follows this table.)
Dataset Splits | No | The paper mentions using "training sets" and "true observed samples" for training, and discusses tuning hyperparameters, but does not explicitly describe a separate validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed machine specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | No | The paper states "The hyperparameters are tuned according to quantitative metrics (will be discussed later) used for different tasks. See Appendix E.3 for implementation details.", but the specific hyperparameter values and training configurations are not provided in the main text.
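
The "Open Datasets" row above notes that the paper uses the MNIST and CIFAR training-set images directly as true samples. As a rough illustration of that setup, the sketch below loads both official training splits. The use of PyTorch/torchvision, the `./data` root directory, and the batch size of 128 are all assumptions for illustration; the paper does not state which framework or data loader its released code uses.

```python
# A minimal sketch, assuming PyTorch/torchvision, of the data setup noted in
# the "Open Datasets" row: the official MNIST and CIFAR-10 training images
# are used directly as "true samples". The paper does not state which
# framework its released code uses, so these APIs are illustrative only.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # scales pixel values to [0, 1]

# Download the official training splits; no separate validation split is
# carved out, consistent with the "Dataset Splits: No" row above.
mnist_train = datasets.MNIST(root="./data", train=True, download=True,
                             transform=to_tensor)
cifar_train = datasets.CIFAR10(root="./data", train=True, download=True,
                               transform=to_tensor)

# Batches of real images stand in for samples from the data distribution;
# the batch size of 128 is an assumed value, not one reported in the paper.
real_loader = DataLoader(mnist_train, batch_size=128, shuffle=True)
real_batch, _ = next(iter(real_loader))  # labels are unused in this setting
print(real_batch.shape)  # torch.Size([128, 1, 28, 28])
```

Swapping `mnist_train` for `cifar_train` yields batches of shape [128, 3, 32, 32]; everything else is unchanged.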