Modelling Microbial Communities with Graph Neural Networks

Authors: Albane Ruaud, Cansu Sancaktar, Marco Bagatella, Christoph Ratzke, Georg Martius

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this work, we model bacterial communities directly from their genomes using graph neural networks (GNNs)... On two real-world datasets, we show for the first time generalization to unseen bacteria and different community structures. To investigate the prediction results more deeply, we create a simulation for flexible data generation and analyze effects of bacteria interaction strength, community size, and training data amount." (A hedged sketch of encoding a community as a graph follows the table.)
Researcher Affiliation | Academia | (1) Cluster of Excellence ML for Science, University of Tübingen, Tübingen, Germany; (2) Autonomous Learning group, Max Planck Institute for Intelligent Systems, Tübingen, Germany; (3) ETH Zürich, Zürich, Switzerland; (4) Cluster of Excellence CMFI, University of Tübingen, Tübingen, Germany.
Pseudocode | No | The paper describes the algorithms and architectures (GNNs, MPNNs, GraphSAGE, MPGNN) in detail but does not include formal pseudocode or algorithm blocks. (An illustrative message-passing step is sketched after the table.)
Open Source Code | No | The paper states that the datasets can be found on its project webpage but provides no statement or link for open-source code implementing the methodology.
Open Datasets | Yes | "We use two publicly available datasets independently recorded by separate laboratories; we describe them here, provide more details in Appendix A.2. Datasets can be found on our project webpage https://sites.google.com/view/microbegnn."
Dataset Splits | Yes | "Cross-validation (CV) was performed on 5 train/validation/test data splits with 5 model initialization seeds for hyperparameter tuning (see Supplementary Table S2)." "Data were split into 80/10/10% train/validation/test sets; five splits were created." (A minimal splitting sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., GPU models, CPU types, or memory).
Software Dependencies | No | The paper names software such as the Adam optimizer (Kingma & Ba, 2015) and the PyTorch Geometric Python package (Fey & Lenssen, 2019) but gives no version numbers for these components or for the programming language.
Experiment Setup | Yes | "For all models, the batch size was 16, training samples were shuffled for making batches, and the learning rate was set to 0.005 for the Adam optimizer (Kingma & Ba, 2015). We trained models for 500 epochs... We used 2 convolutional layers/message-passing steps with 50 and 100 hidden features for FRIEDMAN2017 and BARANWALCLARK2022 data, respectively." (A hedged training-loop reconstruction follows the table.)
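
To make the modelling approach quoted in the Research Type row concrete, the sketch below shows one plausible way to encode a bacterial community as a PyTorch Geometric graph. The feature dimensionality, the fully connected edge structure, and the per-species target are illustrative assumptions, not the authors' exact featurization.

```python
import torch
from torch_geometric.data import Data

# Hypothetical sketch: encode one bacterial community as a graph.
# Node features are genome-derived vectors (the featurization is an
# assumption here); edges fully connect all member species, and the
# target is a per-species outcome such as relative abundance.
num_species = 4
genome_dim = 50  # assumed feature dimensionality

x = torch.randn(num_species, genome_dim)  # one feature row per bacterium

# Fully connected, no self-loops: every species may interact with every other.
src, dst = zip(*[(i, j) for i in range(num_species)
                 for j in range(num_species) if i != j])
edge_index = torch.tensor([src, dst], dtype=torch.long)

y = torch.rand(num_species)  # placeholder per-species target

community = Data(x=x, edge_index=edge_index, y=y)
print(community)  # Data(x=[4, 50], edge_index=[2, 12], y=[4])
```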
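Since the paper ships no pseudocode (Pseudocode row), here is a minimal sketch of a single GraphSAGE message-passing step via PyTorch Geometric's SAGEConv; the layer sizes are placeholders rather than the authors' reported architecture.

```python
import torch
from torch_geometric.nn import SAGEConv

# Illustrative only: one GraphSAGE message-passing step.
conv = SAGEConv(in_channels=50, out_channels=100)

x = torch.randn(4, 50)                    # 4 nodes, 50 features each
edge_index = torch.tensor([[0, 1, 2, 3],  # source nodes
                           [1, 2, 3, 0]]) # target nodes

# With mean aggregation, each node combines its own representation with
# its neighbours' average: h_i' = W1 * h_i + W2 * mean_{j in N(i)} h_j.
h = conv(x, edge_index)
print(h.shape)  # torch.Size([4, 100])
```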
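The split protocol in the Dataset Splits row (five 80/10/10 train/validation/test splits, each paired with five model initialization seeds) can be sketched as follows; the seed values and the `make_split` helper are hypothetical, not the authors' code.

```python
import numpy as np

# Minimal sketch of the reported protocol: five independent 80/10/10
# train/validation/test splits.
def make_split(n_samples, seed):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return (idx[:n_train],                 # 80% train
            idx[n_train:n_train + n_val],  # 10% validation
            idx[n_train + n_val:])         # 10% test

# Five data splits; per the paper, each split is additionally trained
# with five model initialization seeds for hyperparameter tuning.
splits = [make_split(n_samples=1000, seed=s) for s in range(5)]
```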
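Finally, the hyperparameters quoted in the Experiment Setup row can be assembled into a hedged training-loop reconstruction. Only the quoted values (2 message-passing layers, 50/100 hidden features, batch size 16, shuffling, Adam with learning rate 0.005, 500 epochs) come from the paper; the SAGEConv backbone, readout head, loss, and dummy dataset are assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
from torch_geometric.nn import SAGEConv

HIDDEN = 50  # 50 for FRIEDMAN2017, 100 for BARANWALCLARK2022 (as reported)

class TwoLayerGNN(torch.nn.Module):
    """Two message-passing steps, as reported; the backbone choice is assumed."""
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)  # per-node readout (assumed)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.head(h).squeeze(-1)

# Dummy graphs standing in for the real datasets (see project webpage).
train_dataset = [
    Data(x=torch.randn(4, 50),
         edge_index=torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]]),
         y=torch.rand(4))
    for _ in range(64)
]

model = TwoLayerGNN(in_dim=50, hidden=HIDDEN)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)  # reported lr
loader = DataLoader(train_dataset, batch_size=16, shuffle=True)  # reported

for epoch in range(500):  # reported epoch count
    for batch in loader:
        optimizer.zero_grad()
        pred = model(batch.x, batch.edge_index)
        loss = F.mse_loss(pred, batch.y)  # loss choice is an assumption
        loss.backward()
        optimizer.step()
```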