Delta-AI: Local objectives for amortized inference in sparse graphical models

Authors: Jean-Pierre René Falet, Hae Beom Lee, Nikolay Malkin, Chen Sun, Dragos Secrieru, Dinghuai Zhang, Guillaume Lajoie, Yoshua Bengio

ICLR 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "4 EXPERIMENTS: SYNTHETIC DATA"; "5 EXPERIMENTS: VARIATIONAL EM ON REAL DATA" |
| Researcher Affiliation | Academia | Mila – Québec AI Institute, Université de Montréal, Montreal, Quebec, Canada |
| Pseudocode | Yes | Algorithm 1: Δ-amortized inference (basic form) |
| Open Source Code | Yes | Code: https://github.com/GFNOrg/Delta-AI |
| Open Datasets | Yes | "latent variable modeling for MNIST images (Deng, 2012)"; "We use the AMASS dataset (Mahmood et al., 2019)" |
| Dataset Splits | No | The paper mentions held-out test data and that 20% is held out as a test set, but it gives no explicit percentages or sample counts for a validation split, nor does it reference a standard split that includes validation data. |
| Hardware Specification | No | "The research was enabled in part by computational resources provided by the Digital Research Alliance of Canada (https://alliancecan.ca), Mila (https://mila.quebec), and NVIDIA." This acknowledges NVIDIA and computational resources but names no specific hardware models (e.g., GPU series or CPU types) used in the experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | Each model is trained for a total of 200k iterations. Baseline GFNs: batch size is set to 1k. The learning rate of the amortized sampler's parameters is set to 10⁻³ and that of the partition-function estimator to 10⁻¹; both are decayed step-wise by a factor of 0.1 at the 40k-, 80k-, 120k-, 160k-, and 180k-th iterations. ε = 0.1 is used, and the training policy is tempered off-policy sampling with the temperature set to 2. (A configuration sketch follows this table.) |
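
The stepwise decay schedule and the two learning rates in the last row translate directly into standard optimizer configuration. Below is a minimal sketch of that setup, assuming PyTorch and an Adam optimizer (the paper names neither); `sampler` and `log_Z` are hypothetical placeholders for the amortized sampler and the partition-function estimator.

```python
import torch

# Hypothetical placeholders for the two trained components reported in the
# table: the amortized sampler network and a scalar log-partition estimate.
sampler = torch.nn.Linear(64, 64)            # stand-in network
log_Z = torch.nn.Parameter(torch.zeros(1))   # stand-in partition-function estimator

# Two parameter groups with the reported learning rates: 1e-3 for the
# sampler and 1e-1 for the partition-function estimator. Adam is an
# assumption; the paper does not name the optimizer.
optimizer = torch.optim.Adam([
    {"params": sampler.parameters(), "lr": 1e-3},
    {"params": [log_Z], "lr": 1e-1},
])

# Step-wise decay by a factor of 0.1 at the 40k-, 80k-, 120k-, 160k-, and
# 180k-th iterations, over 200k iterations in total, as reported above.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[40_000, 80_000, 120_000, 160_000, 180_000], gamma=0.1
)

for step in range(200_000):
    optimizer.zero_grad()
    # loss = ...   # training objective omitted; see Algorithm 1 in the paper
    # loss.backward()
    optimizer.step()
    scheduler.step()
```

`MultiStepLR` multiplies every parameter group's learning rate by `gamma` at each milestone, which matches the decay applying to both reported rates. The batch size of 1k, the ε = 0.1 exploration, and the temperature-2 tempered off-policy sampling concern data loading and the training policy, and are not shown here.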