Self-Adversarially Learned Bayesian Sampling
Authors: Yang Zhao, Jianyi Zhang, Changyou Chen (pp. 5893–5900)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both synthetic and real datasets verify advantages of our framework, outperforming related methods in terms of both sampling efficiency and sample quality. |
| Researcher Affiliation | Academia | Yang Zhao State University of New York at Buffalo yzhao63@buffalo.edu Jianyi Zhang Fudan University 15300180019@fudan.edu.cn Changyou Chen State University of New York at Buffalo cchangyou@gmail.com |
| Pseudocode | Yes | Algorithm 1: SAL-MC training and sampling |
| Open Source Code | No | The paper states "detailed algorithm given in the Supplementary Material (SM) on our homepage" but does not provide a concrete link to source code for the described methodology. |
| Open Datasets | Yes | Experiments on both synthetic and real datasets... for image synthesis on MNIST and CelebA datasets... We further compare SAL-MC with A-NICE-MC on several BLR tasks... Three datasets, Heart (532-13), Australian (690-14) and German (1000-24), are used... |
| Dataset Splits | No | The paper states "The models are trained on a random 80% of the datasets and tested on the remaining 20% in each run" but does not explicitly mention a separate validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | As suggested by (Welling and Teh 2011), a polynomially-decayed step size ϵ_t = a/(t + 1)^0.55 is used in SGLD for a fair comparison... The mini-batch size for training is 64; and the injected noise ξ is drawn from N(0, I) for all tasks. |
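The SGLD baseline setup quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' code: `grad_log_post` stands in for a hypothetical stochastic gradient of the log-posterior on a mini-batch, and the base rate `a` is an assumed value (the paper only specifies the decay form ϵ_t = a/(t + 1)^0.55 and standard-normal injected noise).

```python
import numpy as np

def sgld_step(theta, grad_log_post, t, a=0.5, rng=np.random.default_rng(0)):
    """One SGLD update with the polynomially decayed step size
    eps_t = a / (t + 1)**0.55 cited from (Welling and Teh 2011).
    `grad_log_post` and `a` are illustrative assumptions."""
    eps = a / (t + 1) ** 0.55
    noise = rng.normal(size=theta.shape)  # xi ~ N(0, I), as stated in the paper
    return theta + 0.5 * eps * grad_log_post(theta) + np.sqrt(eps) * noise

# Toy usage: sample a standard Gaussian, whose score is grad log p(x) = -x.
theta = np.array([5.0])
for t in range(2000):
    theta = sgld_step(theta, lambda x: -x, t)
```

With the decaying schedule, the discretization error and injected noise shrink over time, which is what makes the comparison against the learned sampler fair at matched step budgets.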