Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Sampling with Mollified Interaction Energy Descent
Authors: Lingxiao Li, Qiang Liu, Anna Korba, Mikhail Yurochkin, Justin Solomon
ICLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show experimentally that for unconstrained sampling problems, our algorithm performs on par with existing particle-based algorithms like SVGD, while for constrained sampling problems our method readily incorporates constrained optimization techniques to handle more flexible constraints with strong performance compared to alternatives. |
| Researcher Affiliation | Collaboration | Lingxiao Li (MIT CSAIL); Qiang Liu (University of Texas at Austin); Anna Korba (CREST, ENSAE, IP Paris); Mikhail Yurochkin (IBM Research, MIT-IBM Watson AI Lab); Justin Solomon (MIT CSAIL) |
| Pseudocode | Yes | Algorithm 1: Mollified interaction energy descent (MIED) in the logarithmic domain. |
| Open Source Code | Yes | The source code can be found at https://github.com/lingxiaoli94/MIED. |
| Open Datasets | Yes | Fairness Bayesian neural networks. We train fair Bayesian neural networks to predict whether the annual income of a person is at least $50,000 with gender as the protected attribute using the Adult Income dataset (Kohavi et al., 1996). |
| Dataset Splits | Yes | We use 80%/20% training/test split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | All methods by default use a learning rate of 0.01 with Adam optimizer (Kingma & Ba, 2014). The paper mentions the Adam optimizer but does not specify its version or the versions of any other software libraries or programming languages used. |
| Experiment Setup | Yes | All methods by default use a learning rate of 0.01 with Adam optimizer (Kingma & Ba, 2014). All methods are run with identical initialization and learning rate 0.01. Results are reported after 10^4 iterations. |
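The Experiment Setup row reports the Adam optimizer with learning rate 0.01, identical initialization across methods, and results after 10^4 iterations. As a hedged illustration of that setup, the sketch below runs a generic particle-descent loop with a hand-rolled Adam update at those hyperparameters. The toy energy (a Gaussian log-density pull plus a smoothed pairwise repulsion) and the function names `grad_energy`/`run` are illustrative assumptions, not the paper's mollified interaction energy or the authors' code (see their repository for the real implementation).

```python
import math
import random

def grad_energy(xs):
    """Gradient of a TOY energy: standard-Gaussian pull (d/dx of x^2/2)
    plus a smoothed pairwise repulsion. A stand-in, not MIED's energy."""
    n = len(xs)
    g = [x for x in xs]  # attraction toward the Gaussian mode at 0
    for i in range(n):
        for j in range(n):
            if i != j:
                d = xs[i] - xs[j]
                # gradient of -0.5*log(d^2 + c): descent pushes particles apart
                g[i] += -d / (d * d + 1e-2) / n
    return g

def run(n=20, iters=10_000, lr=0.01, b1=0.9, b2=0.999, eps=1e-8, seed=0):
    """Particle descent with Adam (lr 0.01, 10^4 iterations by default,
    matching the reported setup)."""
    rng = random.Random(seed)
    xs = [rng.uniform(-3.0, 3.0) for _ in range(n)]
    m = [0.0] * n  # first-moment estimates
    v = [0.0] * n  # second-moment estimates
    for t in range(1, iters + 1):
        g = grad_energy(xs)
        for i in range(n):
            m[i] = b1 * m[i] + (1 - b1) * g[i]
            v[i] = b2 * v[i] + (1 - b2) * g[i] * g[i]
            m_hat = m[i] / (1 - b1 ** t)  # bias correction
            v_hat = v[i] / (1 - b2 ** t)
            xs[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return xs
```

Running `run()` spreads the particles around the mode rather than collapsing them all to 0, which is the qualitative behavior interaction-energy samplers aim for; the repulsion smoothing constant (1e-2) is an arbitrary choice for this toy.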