Nonlinear MCMC for Bayesian Machine Learning
Authors: James Vuckovic
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply this nonlinear MCMC technique to sampling problems including a Bayesian neural network on CIFAR10. In experiments, we compare these nonlinear MCMC samplers to their linear counterparts, and find that nonlinear MCMC provides additional flexibility in designing sampling algorithms with as good, or better, performance as the linear variety. |
| Researcher Affiliation | Academia | James Vuckovic james@jamesvuckovic.com |
| Pseudocode | Yes | see Appendix A for pseudocode implementing the nonlinear MCMC algorithms we have now constructed. |
| Open Source Code | Yes | The code used in our experiments can be found at https://github.com/jamesvuc/nonlinear-mcmc-paper. |
| Open Datasets | Yes | We apply this nonlinear MCMC technique to sampling problems including a Bayesian neural network on CIFAR10. [...] implemented a Bayesian neural network on the CIFAR10 dataset. |
| Dataset Splits | No | The paper uses the CIFAR10 dataset and mentions Dtrain but does not explicitly provide details about training/validation/test splits, such as specific percentages, sample counts, or citations to predefined validation splits. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using JAX and Haiku and provides a link to the code, but it does not specify the version numbers for these software dependencies (e.g., Python, JAX, Haiku versions) needed to replicate the experiment. |
| Experiment Setup | Yes | In our experiments, we choose µ to be a centered, 2-dimensional Gaussian with a large variance (Σ = 4I₂ for the circular MoG and two-rings densities, and Σ = 20I₂ for the grid MoG). We sample minibatches b ⊂ D of size 256 to obtain the surrogate target density P(θ \| b) [17]. For our experiments, we pick µ ∝ P(θ \| Dtrain)^{1/τ} and π = P(θ \| Dtrain). |
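The setup row above describes two standard ingredients that are easy to illustrate: a minibatch surrogate for the full-data log-posterior (the likelihood term rescaled by the dataset-to-batch ratio) and a tempered auxiliary density proportional to the posterior raised to 1/τ. The sketch below is a minimal illustration under assumed toy choices (a standard-normal prior and a Gaussian likelihood; the function names and the batch size of 256 mirror the description but are otherwise hypothetical), not the paper's actual implementation:

```python
import jax
import jax.numpy as jnp

def log_prior(theta):
    # Standard normal prior over parameters (illustrative choice).
    return -0.5 * jnp.sum(theta ** 2)

def log_likelihood(theta, x):
    # Toy Gaussian likelihood: observations centered at theta.
    return -0.5 * jnp.sum((x - theta) ** 2)

def surrogate_log_posterior(theta, minibatch, n_total):
    # Minibatch estimate of log P(theta | D): the likelihood term is
    # rescaled by n_total / batch_size so it is unbiased for the
    # full-data sum, as in stochastic-gradient MCMC.
    scale = n_total / minibatch.shape[0]
    return log_prior(theta) + scale * log_likelihood(theta, minibatch)

def tempered_log_density(theta, minibatch, n_total, temperature):
    # Auxiliary density mu ∝ P(theta | D)^(1/temperature):
    # dividing the log-density by tau flattens the posterior.
    return surrogate_log_posterior(theta, minibatch, n_total) / temperature

# Toy dataset D of 1024 points and one minibatch b ⊂ D of size 256.
key = jax.random.PRNGKey(0)
data = jax.random.normal(key, (1024, 2)) + 1.0
batch = data[jax.random.permutation(key, 1024)[:256]]

theta = jnp.zeros(2)
lp = tempered_log_density(theta, batch, n_total=1024, temperature=4.0)
```

The tempered log-density is exactly the surrogate log-posterior divided by τ, so the flattened auxiliary target and the untempered target (τ = 1) use the same minibatch estimator.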