Local-Global MCMC kernels: the best of both worlds
Authors: Sergey Samsonov, Evgeny Lagutin, Marylou Gabrié, Alain Durmus, Alexey Naumov, Eric Moulines
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the efficiency of Ex2MCMC and its adaptive version on classical sampling benchmarks as well as in sampling high-dimensional distributions defined by Generative Adversarial Networks seen as Energy Based Models. We perform a numerical evaluation of Ex2MCMC and FlEx2MCMC for various sampling problems, including sampling GANs as energy-based models. |
| Researcher Affiliation | Academia | Sergey Samsonov (HSE University), Evgeny Lagutin (HSE University), Marylou Gabrié (Ecole Polytechnique), Alain Durmus (ENS Paris-Saclay), Alexey Naumov (HSE University), Eric Moulines (Ecole Polytechnique) |
| Pseudocode | Yes | Algorithm 1: Single stage of i-SIR algorithm with independent proposals; Algorithm 2: Single stage of Ex2MCMC algorithm with independent proposals; Algorithm 3: Single stage of FlEx2MCMC. (An illustrative sketch of this single-stage structure is given after the table.) |
| Open Source Code | Yes | We provide the code to reproduce the experiments below at https://github.com/svsamsonov/ex2mcmc_new. |
| Open Datasets | Yes | We consider sampling from a mixture of 3 equally weighted Gaussians in dimension d = 2. ... Following [52] and [32], we consider the funnel and the banana-shaped distributions. ... MNIST results. We consider a simple Jensen-Shannon GAN model trained on the MNIST dataset... CIFAR-10 results. We consider two popular architectures trained on CIFAR-10... |
| Dataset Splits | No | The experiments concern MCMC sampling from various distributions and from pre-trained GANs; the proposed Ex2MCMC method is not trained in a supervised learning setting, so no train/validation/test splits are needed for its evaluation. |
| Hardware Specification | No | The paper states: '3. If you ran experiments... (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] We provide this information in the supplement paper.' This indicates the hardware specifications are not provided in the main paper. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers in the main text. It mentions that training details (which might include software dependencies) are provided in the supplement: '3. If you ran experiments... (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] The hyperparameters are provided in the supplement paper.' |
| Experiment Setup | No | The paper states: '3. If you ran experiments... (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] The hyperparameters are provided in the supplement paper.' This indicates that comprehensive experimental setup details are not in the main paper. |
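
The pseudocode row above refers to the single-stage structure of i-SIR and Ex2MCMC described in the paper. As an illustration only, the sketch below shows what one such stage can look like in NumPy: a global i-SIR move (the current state is pooled with fresh independent proposals and resampled by self-normalized importance weights), followed by a local MALA rejuvenation step. All function names, signatures, and default values here are our own illustrative choices under these assumptions, not the authors' reference implementation; for that, see the repository linked in the table.

```python
import numpy as np

def isir_stage(x, log_target, log_proposal, sample_proposal, n_particles, rng):
    """One i-SIR stage: pool the current state with fresh independent proposals
    and resample according to self-normalized importance weights."""
    pool = np.vstack([x[None, :], sample_proposal(n_particles - 1, x.shape[0], rng)])
    # Importance weights w_i ∝ π(y_i) / λ(y_i), computed in log space for stability.
    log_w = np.array([log_target(y) - log_proposal(y) for y in pool])
    log_w -= log_w.max()
    probs = np.exp(log_w)
    probs /= probs.sum()
    return pool[rng.choice(n_particles, p=probs)]

def mala_step(x, log_target, grad_log_target, step, rng):
    """One MALA step, used here as the local 'exploitation' kernel."""
    prop = x + step * grad_log_target(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

    def log_q(a, b):
        # Log density (up to a common constant) of the Langevin proposal b -> a.
        diff = a - b - step * grad_log_target(b)
        return -np.sum(diff ** 2) / (4.0 * step)

    log_alpha = (log_target(prop) + log_q(x, prop)) - (log_target(x) + log_q(prop, x))
    return prop if np.log(rng.uniform()) < log_alpha else x

def ex2mcmc_stage(x, log_target, grad_log_target, log_proposal, sample_proposal,
                  n_particles=10, step=1e-2, n_local=1, rng=None):
    """One Ex2MCMC stage: global i-SIR move followed by local MALA rejuvenation."""
    rng = rng if rng is not None else np.random.default_rng()
    x = isir_stage(x, log_target, log_proposal, sample_proposal, n_particles, rng)
    for _ in range(n_local):
        x = mala_step(x, log_target, grad_log_target, step, rng)
    return x
```

For instance, with `log_target` set to the unnormalized log-density of a Gaussian mixture and `sample_proposal`/`log_proposal` describing a broad isotropic Gaussian, repeatedly calling `ex2mcmc_stage` yields a chain whose global i-SIR moves can jump between modes while the local MALA steps refine samples within a mode.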