Sampling in Constrained Domains with Orthogonal-Space Variational Gradient Descent
Authors: Ruqi Zhang, Qiang Liu, Xin T. Tong
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We implement O-Gradient through both Langevin dynamics and Stein variational gradient descent and demonstrate its effectiveness in various experiments, including Bayesian deep neural networks." and "We empirically demonstrate the sampling performance of O-Langevin and O-SVGD across different constrained ML problems." |
| Researcher Affiliation | Academia | Ruqi Zhang, Department of Computer Science, Purdue University, ruqiz@purdue.edu; Qiang Liu, Department of Computer Science, University of Texas at Austin, lqiang@cs.texas.edu; Xin T. Tong, Department of Mathematics, National University of Singapore, mattxin@nus.edu.sg |
| Pseudocode | Yes | "Algorithm 1 O-SVGD and O-Langevin." (an illustrative sketch of the orthogonal-projection update appears below the table) |
| Open Source Code | Yes | We released the code at https://github.com/ruqizhang/o-gradient. |
| Open Datasets | Yes | "We use Adult Income dataset [13]" and "We apply our methods to image classification on CIFAR10 with ResNet-18." |
| Dataset Splits | No | The paper mentions "training" and "testing" but does not provide specific details on dataset splits (e.g., percentages, sample counts, or explicit reference to standard splits used) for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names like PyTorch 1.9 or TensorFlow 2.x). |
| Experiment Setup | No | The paper describes some high-level experimental details, such as model architectures (two-layer MLP, ResNet-18) and the number of particles (n = 50 for the synthetic task), but it does not provide specific hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer settings needed for a complete experimental setup. |
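To make the "O-Gradient through Langevin dynamics" idea referenced above concrete, here is a minimal, hypothetical sketch of an orthogonal-space Langevin step for a single equality constraint g(x) = 0. It is not the authors' exact O-Langevin update from Algorithm 1 (for example, it omits any divergence-correction term); the function names, step size, and the alpha weight on the constraint-attraction term are illustrative assumptions.

```python
# Toy orthogonal-space Langevin step: drift and noise are projected onto the
# tangent space of {g(x) = 0}, while a separate term drives g(x) toward 0.
# Illustrative sketch only; not the paper's exact O-Langevin algorithm.
import numpy as np

def o_langevin_step(x, grad_log_p, g, grad_g, step=1e-3, alpha=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = grad_g(x)                                   # normal direction of the constraint
    n_sq = np.dot(n, n) + 1e-12
    proj = np.eye(len(x)) - np.outer(n, n) / n_sq   # projector onto span{n}^perp
    drift = proj @ grad_log_p(x)                    # in-manifold (orthogonal-space) drift
    drift -= alpha * g(x) * n / n_sq                # pulls the sample toward g(x) = 0
    noise = proj @ rng.standard_normal(len(x))      # noise restricted to the tangent space
    return x + step * drift + np.sqrt(2.0 * step) * noise

# Example: sample from a standard Gaussian restricted to the unit circle in 2D.
grad_log_p = lambda x: -x                  # score of N(0, I)
g = lambda x: np.dot(x, x) - 1.0           # constraint: ||x||^2 = 1
grad_g = lambda x: 2.0 * x
x = np.array([2.0, 0.0])
for _ in range(5000):
    x = o_langevin_step(x, grad_log_p, g, grad_g)
print(g(x))                                # should be close to 0
```

The sketch only illustrates the general pattern of splitting the update into a component orthogonal to the constraint gradient and a component that reduces the constraint violation; for the actual O-Langevin and O-SVGD algorithms and their hyperparameters, see the paper and the released code at https://github.com/ruqizhang/o-gradient.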