DynGFN: Towards Bayesian Inference of Gene Regulatory Networks with GFlowNets

Authors: Lazar Atanackovic, Alexander Tong, Bo Wang, Leo J. Lee, Yoshua Bengio, Jason S. Hartford

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our results indicate that our method learns posteriors that better encapsulate the distributions of cyclic structures compared to counterpart state-of-the-art Bayesian structure learning approaches. We empirically evaluated DynGFN on synthetic dynamic data designed to induce highly multimodal posteriors over graphs. ... In Table 1 we show results of our synthetic experiments for learning posteriors over multi-modal distributions of cyclic graphs. We observe that DynGFN is most competitive on both synthetic systems for modelling the true posterior over structure."
Researcher Affiliation | Collaboration | "1 University of Toronto, 2 Vector Institute, 3 Mila – Quebec AI Institute, 4 Université de Montréal, 5 University Health Network, 6 CIFAR Fellow, 7 Valence Labs"
Pseudocode | Yes | "Algorithm 1: Batch update training of DynGFN" (a hedged sketch of such a batch update appears after the table).
Open Source Code | Yes | "Our model is implemented in PyTorch and PyTorch Lightning and is available at https://github.com/lazaratan/dyn-gfn."
Open Datasets | Yes | "We start with the processed data from [46]." Reference [46]: Riba, A., Oravecz, A., Durik, M., Jiménez, S., Alunni, V., Cerciat, M., Jung, M., Keime, C., Keyes, W. M., and Molina, N. Cell cycle gene regulation dynamics revealed by RNA velocity and deep-learning. Nature Communications, 13, 2022.
Dataset Splits | No | "Table 7: Hyper-parameters for DynGFN. We define the learning rate as ϵ, m_train as the number of training samples, and m_eval as the number of evaluation samples. ... Results are reported on held out test data over 5 model seeds." The paper specifies m_train and m_eval sample counts and refers to "held out test data", but it does not explicitly define distinct training/validation/test splits with percentages or counts, nor a separate validation set.
Hardware Specification | Yes | "Models were trained on a heterogeneous mix of HPC clusters for a total of ~1,000 GPU hours, primarily on NVIDIA RTX8000 GPUs."
Software Dependencies | No | "Our model is implemented in PyTorch and PyTorch Lightning." No version numbers are provided.
Experiment Setup | Yes | "Table 5: Hyper-parameters for DynBCD. Table 6: Hyper-parameters for DynDiBS. Table 7: Hyper-parameters for DynGFN." These tables list specific values for the learning rate ϵ, λ₀, T, α, γ, m_train, and m_eval (an illustrative config layout follows the table).
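
The paper's Algorithm 1 (batch update training of DynGFN) is only named in the Pseudocode row above, so here is a minimal, hypothetical sketch of what a GFlowNet-style batch update over graph structures can look like. Everything below is an illustrative assumption rather than the authors' implementation: the EdgePolicy network, the sparsity log_reward placeholder, the trajectory-balance objective, and the trivial backward-policy simplification are ours, not DynGFN's.

```python
# Hypothetical sketch of a GFlowNet batch update over (possibly cyclic)
# graph structures. NOT the authors' Algorithm 1; a simplified stand-in.
import torch
import torch.nn as nn

N_NODES = 3                   # toy number of genes
N_EDGES = N_NODES * N_NODES   # candidate directed edges (cycles allowed)

class EdgePolicy(nn.Module):
    """Forward policy P_F: given a partial adjacency (flattened 0/1 vector),
    score N_EDGES "add this edge" actions plus one terminal "stop" action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_EDGES, 64), nn.ReLU(), nn.Linear(64, N_EDGES + 1)
        )
        self.log_z = nn.Parameter(torch.zeros(1))  # learned estimate of log Z

    def forward(self, state):
        logits = self.net(state)
        # Forbid re-adding edges that are already present in the graph.
        mask = torch.cat(
            [state.bool(), torch.zeros(state.size(0), 1, dtype=torch.bool)], dim=1
        )
        return logits.masked_fill(mask, float("-inf"))

def log_reward(graphs):
    # Placeholder reward: DynGFN scores graphs by how well they explain the
    # observed dynamics; here we simply prefer sparse graphs so the demo runs.
    return -graphs.sum(dim=1)

def batch_update(policy, opt, batch_size=16):
    """One batch update: sample a batch of graphs edge-by-edge with P_F,
    then take a gradient step on a trajectory-balance loss."""
    state = torch.zeros(batch_size, N_EDGES)
    log_pf = torch.zeros(batch_size)
    done = torch.zeros(batch_size, dtype=torch.bool)
    for _ in range(N_EDGES + 1):  # at most every edge, then "stop"
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        step_lp = dist.log_prob(action)
        log_pf = log_pf + torch.where(done, torch.zeros_like(step_lp), step_lp)
        stop = action == N_EDGES
        add = ~done & ~stop
        idx = add.nonzero(as_tuple=True)[0]
        state = state.clone()  # out-of-place update keeps autograd happy
        state[idx, action[idx]] = 1.0
        done = done | stop
        if done.all():
            break
    # Trajectory balance: (log Z + log P_F(tau) - log R(x))^2, assuming a
    # trivial backward policy -- a simplification for this sketch only.
    loss = (policy.log_z + log_pf - log_reward(state)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

policy = EdgePolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    batch_update(policy, opt)
```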
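Tables 5–7 themselves are not reproduced in this report. As a rough illustration of the experiment-setup surface they describe, a configuration could be laid out as below; the keys mirror the symbols the tables list, but every value is a placeholder rather than a setting reported by the authors.

```python
# Hypothetical layout of the hyper-parameters named in Tables 5-7.
# All numeric values are placeholders; consult the paper's appendix
# for the actual settings used for DynBCD, DynDiBS, and DynGFN.
dyn_gfn_hparams = {
    "epsilon": 1e-3,   # learning rate (denoted epsilon in Table 7)
    "lambda_0": 1.0,   # lambda_0, as defined in the paper
    "T": 10,           # T, as defined in the paper
    "alpha": 0.1,      # alpha, as defined in the paper
    "gamma": 0.9,      # gamma, as defined in the paper
    "m_train": 1000,   # number of training samples
    "m_eval": 200,     # number of evaluation samples
}
n_model_seeds = 5      # results are reported on held-out test data over 5 seeds
```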