BioBridge: Bridging Biomedical Foundation Models via Knowledge Graphs

Authors: Zifeng Wang, Zichen Wang, Balasubramaniam Srinivasan, Vassilis N. Ioannidis, Huzefa Rangwala, Rishita Anubhai

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our results demonstrate that BioBridge can beat the best baseline KG embedding methods (on average by 76.3%) in cross-modal retrieval tasks.
Researcher Affiliation | Collaboration | Zifeng Wang (University of Illinois Urbana-Champaign, zifengw2@illinois.edu); Zichen Wang (Amazon AWS AI, zichewan@amazon.com)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | Yes | Code is at https://github.com/RyanWangZf/BioBridge.
Open Datasets | Yes | We draw a subset of PrimeKG (Chandak et al., 2023) to build the training knowledge graph.
Dataset Splits | Yes | For each type of triple, we randomly sample 80%, 10%, and 10% for the train, validation, and test sets, respectively. (See the split sketch after this table.)
Hardware Specification | No | The paper does not specify the hardware used for its experiments (exact GPU/CPU models, processor speeds, or memory amounts); it only describes the general experimental setup.
Software Dependencies | No | The paper mentions specific models such as ESM2-3B, Uni-Mol, and PubMedBERT, but does not provide version numbers for ancillary software dependencies such as deep learning frameworks or specific libraries.
Experiment Setup | Yes | As such, we keep the same set of hyperparameters for BioBridge across all experiments: batch size 4096, training epochs 50, and learning rate 1e-4. (See the training-setup sketch after this table.)
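
The dataset rows above describe drawing a subset of PrimeKG and splitting each triple type 80%/10%/10% into train, validation, and test sets. Below is a minimal sketch of such a per-relation-type split; the file name `kg.csv` and the column name `relation` are assumptions about the PrimeKG CSV layout, not details taken from the paper.

```python
# Hypothetical sketch of an 80/10/10 split applied within each relation type.
# The file name and the "relation" column are assumptions, not from the paper.
import pandas as pd


def split_triples(kg_path: str = "kg.csv", seed: int = 42):
    """Split each relation type into 80% train, 10% validation, 10% test."""
    kg = pd.read_csv(kg_path)
    train_parts, valid_parts, test_parts = [], [], []
    for _, group in kg.groupby("relation"):
        # Shuffle the triples of this relation type before slicing.
        group = group.sample(frac=1.0, random_state=seed)
        n_train = int(0.8 * len(group))
        n_valid = int(0.1 * len(group))
        train_parts.append(group.iloc[:n_train])
        valid_parts.append(group.iloc[n_train:n_train + n_valid])
        test_parts.append(group.iloc[n_train + n_valid:])
    return pd.concat(train_parts), pd.concat(valid_parts), pd.concat(test_parts)


if __name__ == "__main__":
    train, valid, test = split_triples()
    print(len(train), len(valid), len(test))
```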
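
The experiment-setup row reports three hyperparameters: batch size 4096, 50 training epochs, and learning rate 1e-4. The sketch below only wires those values into a generic PyTorch training loop; the projection module, the dataset, and the loss are placeholders and are not BioBridge's actual architecture or training objective.

```python
# Sketch of the reported training configuration (batch size 4096, 50 epochs,
# learning rate 1e-4). Model, data, and loss are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

BATCH_SIZE = 4096
EPOCHS = 50
LEARNING_RATE = 1e-4

# Placeholder tensors standing in for frozen foundation-model embeddings.
features = torch.randn(10_000, 768)
targets = torch.randn(10_000, 768)
loader = DataLoader(TensorDataset(features, targets),
                    batch_size=BATCH_SIZE, shuffle=True)

# Placeholder projection module mapping one embedding space to another.
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_fn = nn.MSELoss()  # placeholder objective, not the paper's loss

for epoch in range(EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```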