Towards Understanding and Improving GFlowNet Training

Authors: Max W. Shen, Emmanuel Bengio, Ehsan Hajiramezanali, Andreas Loukas, Kyunghyun Cho, Tommaso Biancalani

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In experiments on biochemical design tasks, we demonstrate that these changes in learned flows can significantly impact sample efficiency and convergence to the target distribution, with up to 10x improvement."
Researcher Affiliation | Collaboration | (1) Genentech, South San Francisco, USA; (2) Prescient Design, Genentech, South San Francisco, USA; (3) Recursion Pharmaceuticals, Salt Lake City, Utah, USA; (4) Department of Computer Science, New York University, New York, USA.
Pseudocode | No | The paper describes its methods in text and equations but does not include any explicitly labeled pseudocode or algorithm blocks. (A hedged sketch of the underlying trajectory balance update appears after this table.)
Open Source Code | Yes | "Our code is available at https://github.com/maxwshen/gflownet."
Open Datasets | Yes | "SIX6 (TFBind8)... from (Barrera et al., 2016; Trabucco et al., 2022)."
Dataset Splits | No | The paper describes a generative model that samples data during training and evaluation, and does not specify traditional train/validation/test dataset splits for reproduction.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions using "PyTorch neural network initializations" but does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | "We found it useful to clip gradient norms to a maximum of 10.0. We also clipped policy logit predictions to a minimum of -50.0 and a maximum of 50.0. We initialized log Zθ to 5.0... every active training round we sampled a batch of 16 x... For prioritized replay training, we focus on the top 10% ranked by reward and randomly sample among them to be 50% of the batch... We use a small neural net policy with two layers of 16 hidden units. We use an exploration epsilon of 0.10." (These settings are gathered into a configuration sketch after the table.)
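
The log Zθ quoted in the Experiment Setup row is the learned partition-function parameter of the trajectory balance objective (Malkin et al., 2022), which the paper builds on. Since the paper itself contains no pseudocode, below is a minimal PyTorch sketch of that loss, assuming per-trajectory log-probabilities have already been summed; the function and argument names are illustrative, not the authors' implementation.

```python
import torch

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Trajectory balance loss (Malkin et al., 2022):

        L_TB(tau) = (log Z + sum_t log P_F(s_{t+1} | s_t)
                     - log R(x) - sum_t log P_B(s_t | s_{t+1}))^2

    log_pf / log_pb are the summed forward / backward log-probabilities
    along a sampled trajectory tau ending in object x; log_Z is a learned
    scalar parameter (the paper reports initializing it to 5.0).
    """
    return (log_Z + log_pf - log_reward - log_pb) ** 2
```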
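
For concreteness, the quoted setup can also be collected into a single configuration sketch. Only the clipping values, the log Zθ initialization, the batch size, the replay rule, the policy width, and the exploration epsilon come from the quotes above; the task dimensions, optimizer choice, learning rate, and all function and variable names are assumptions, not taken from the authors' repository.

```python
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4  # assumed task dimensions (not from the paper)

# "a small neural net policy with two layers of 16 hidden units"
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, N_ACTIONS),
)

# "We initialized log Z_theta to 5.0"
log_Z = nn.Parameter(torch.tensor(5.0))
opt = torch.optim.Adam([*policy.parameters(), log_Z], lr=1e-3)  # optimizer and lr assumed

EPSILON = 0.10     # "an exploration epsilon of 0.10"
BATCH_SIZE = 16    # "sampled a batch of 16 x" per active training round

def policy_logits(state: torch.Tensor) -> torch.Tensor:
    # "clipped policy logit predictions to a minimum of -50.0 and a maximum of 50.0"
    return policy(state).clamp(-50.0, 50.0)

def replay_batch(buffer: list) -> list:
    # "focus on the top 10% ranked by reward and randomly sample among
    # them to be 50% of the batch"; buffer holds (x, reward) pairs.
    # Drawing the remaining half uniformly from the buffer is an assumption.
    ranked = sorted(buffer, key=lambda pair: pair[1], reverse=True)
    top = ranked[: max(1, len(ranked) // 10)]
    half = BATCH_SIZE // 2
    prioritized = [random.choice(top) for _ in range(half)]
    rest = [random.choice(buffer) for _ in range(BATCH_SIZE - half)]
    return prioritized + rest

def optimize(loss: torch.Tensor) -> None:
    opt.zero_grad()
    loss.backward()
    # "clip gradient norms to a maximum of 10.0"
    nn.utils.clip_grad_norm_([*policy.parameters(), log_Z], max_norm=10.0)
    opt.step()
```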