Local Search GFlowNets

Authors: Minsu Kim, Taeyoung Yun, Emmanuel Bengio, Dinghuai Zhang, Yoshua Bengio, Sungsoo Ahn, Jinkyoo Park

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate a remarkable performance improvement in several biochemical tasks." "We present our experimental results on 6 biochemical tasks, including molecule optimization and biological sequence design."
Researcher Affiliation | Collaboration | Minsu Kim & Taeyoung Yun (KAIST); Emmanuel Bengio (Recursion); Dinghuai Zhang (Mila, Université de Montréal); Yoshua Bengio (Mila, Université de Montréal, CIFAR); Sungsoo Ahn (POSTECH); Jinkyoo Park (KAIST, Omelet)
Pseudocode | Yes | Algorithm 1: Local Search GFlowNet (LS-GFN). A hedged sketch of the revision loop appears after the table.
Open Source Code | Yes | Source code is available: https://github.com/dbsxodud-11/ls_gfn
Open Datasets | Yes | QM9: "Our goal is to generate a small molecule graph...obtained via a pre-trained MXMNet (Zhang et al., 2020) proxy." sEH: "Our goal is to generate binders of the sEH protein...provided by the pre-trained proxy model of Bengio et al. (2021)." TFBind8: "Our goal is to generate a string of length 8 of nucleotides..." (Trabucco et al., 2022). RNA-Binding: introduced by Sinai et al. (2020).
Dataset Splits | No | The paper mentions a 'training dataset D' and 'training rounds' but does not specify explicit train/validation/test splits with percentages, absolute counts, or references to well-defined standard splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions software such as the ADAM optimizer and an MLP architecture, and notes that it follows the implementation of Shen et al. (2023), but it does not provide version numbers for any key software components (e.g., 'Python 3.8, PyTorch 1.9').
Experiment Setup | Yes | "For all tasks, we use the ADAM (Kingma & Ba, 2015) optimizer with learning rate 1e-2 for log Zθ and 1e-4 for the forward and backward policies." "We use different reward exponents β..." "For LS-GFN, we set the number of candidate samples to M = 4 and the number of local search iterations to I = 7 as default values." Table 3 specifies 'Number of Layers', 'Hidden Units', 'Reward Exponent (β)', and 'Training Rounds (T)'. A sketch of the optimizer configuration also follows the table.
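The pseudocode row above refers to Algorithm 1 (LS-GFN). As a reading aid, here is a minimal Python sketch of its local search revision loop, as we understand it from the paper: a complete trajectory is partially destroyed by the backward policy, rebuilt by the forward policy, and kept only if it passes an acceptance filter. The helper names (`backtrack`, `complete`, `reward`) are hypothetical placeholders, not identifiers from the released code.

```python
import random

def ls_gfn_revise(traj, backtrack, complete, reward,
                  num_iterations=7, stochastic=False):
    """Illustrative sketch of the LS-GFN revision loop (Algorithm 1).

    Not the authors' code: `backtrack` stands in for partial destruction
    with the backward policy, `complete` for reconstruction with the
    forward policy, and `reward` for the task's reward function R(x).
    """
    for _ in range(num_iterations):
        partial = backtrack(traj)      # destroy the trajectory tail (backward policy)
        candidate = complete(partial)  # rebuild a full trajectory (forward policy)
        r_new, r_old = reward(candidate), reward(traj)
        if stochastic:
            # Stochastic filtering: accept with probability min(1, R(x') / R(x)).
            accept = random.random() < min(1.0, r_new / max(r_old, 1e-12))
        else:
            # Deterministic filtering: accept only strict reward improvements.
            accept = r_new > r_old
        if accept:
            traj = candidate
    return traj
```

The default `num_iterations=7` mirrors the paper's reported I = 7; whether the acceptance rule matches the authors' exact filtering variants should be checked against the repository above.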
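The optimizer settings quoted in the last row map directly onto PyTorch parameter groups. A minimal sketch, assuming placeholder modules for log Zθ and the two policies (the identifiers below are illustrative, not taken from the repository):

```python
import torch

# Placeholder parameters standing in for the paper's components.
log_Z = torch.nn.Parameter(torch.zeros(1))    # log-partition estimate log Z_theta
forward_policy = torch.nn.Linear(64, 64)      # stand-in for the forward policy network
backward_policy = torch.nn.Linear(64, 64)     # stand-in for the backward policy network

# One ADAM optimizer with per-group learning rates, matching the reported
# setup: 1e-2 for log Z_theta, 1e-4 for the forward and backward policies.
optimizer = torch.optim.Adam([
    {"params": [log_Z], "lr": 1e-2},
    {"params": [*forward_policy.parameters(),
                *backward_policy.parameters()], "lr": 1e-4},
])
```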