Antibody-Antigen Docking and Design via Hierarchical Structure Refinement

Authors: Wengong Jin, Regina Barzilay, Tommi Jaakkola

Venue: ICML 2022

Reproducibility assessment: each entry lists the variable, the result, and the supporting LLM response.
Research Type: Experimental. LLM response: We evaluate our method on standard paratope docking and design benchmarks (Adolf-Bryfogle et al., 2018). In terms of docking, we compare HSRN against HDOCK (Yan et al., 2020) and combine it with IgFold (Ruffolo & Gray, 2022) for end-to-end paratope folding and docking. In terms of generation, we compare HSRN against standard sequence-based generative models and a state-of-the-art structure-based method (Jin et al., 2021). HSRN significantly outperformed all baselines in both settings, with over 50% absolute improvement in docking success rate. (A minimal sketch of such a success-rate metric appears after this table.)
Researcher Affiliation: Academia. LLM response: Eric and Wendy Schmidt Center, Broad Institute of MIT and Harvard; CSAIL, Massachusetts Institute of Technology.
Pseudocode: No. LLM response: The paper describes its methods and algorithms using text and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. LLM response: The code is available at github.com/wengong-jin/abdockgen.
Open Datasets: Yes. LLM response: Our training data comes from the Structural Antibody Database (SAbDab) (Dunbar et al., 2014).
Dataset Splits: No. LLM response: The paper mentions the training set size, but it does not specify an explicit validation split (e.g., percentages or sample counts) needed for reproducibility.
Hardware Specification: No. LLM response: The paper does not specify any hardware details (e.g., CPU or GPU models, memory, or the computing environment) used to run the experiments.
Software Dependencies: No. LLM response: The paper mentions using an Adam optimizer but does not specify software names with version numbers for its libraries or dependencies (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup: Yes. LLM response: Each MPN in our hierarchical encoder contains four message passing layers with a hidden dimension of 256. The docking module performs eight iterations of structure refinement. All models are trained by an Adam optimizer for 20 epochs with 10% dropout. (A hedged configuration sketch follows the table.)
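
The "docking success rate" cited in the Research Type row is a thresholded hit-rate metric. As a point of reference, here is a minimal sketch of such a metric in Python. The DockQ >= 0.23 cutoff below is the standard CAPRI "acceptable" threshold and is an assumption; the paper's exact success criterion may differ.

```python
def docking_success_rate(dockq_scores, threshold=0.23):
    """Fraction of predicted complexes scoring at or above the threshold.

    Assumes one quality score (e.g., DockQ) per docked complex; the
    threshold is an assumed CAPRI-style cutoff, not taken from the paper.
    """
    if not dockq_scores:
        raise ValueError("expected at least one score")
    hits = sum(1 for s in dockq_scores if s >= threshold)
    return hits / len(dockq_scores)

# Example: two of four predictions clear the threshold -> 0.5
print(docking_success_rate([0.10, 0.45, 0.30, 0.05]))
```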
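The Experiment Setup row reports enough hyperparameters for a rough skeleton. The sketch below wires the stated values (four message passing layers, hidden dimension 256, eight refinement iterations, Adam, 20 epochs, 10% dropout) into a generic PyTorch message-passing layer. Everything else, including the class name SimpleMPN, the adjacency-matrix graph encoding, and the coordinate-update function, is hypothetical and is not the authors' implementation (see github.com/wengong-jin/abdockgen for that).

```python
import torch
import torch.nn as nn

# Hyperparameters as reported in the paper's experiment setup.
HIDDEN_DIM = 256        # hidden dimension of each MPN
NUM_MP_LAYERS = 4       # message passing layers per MPN
NUM_REFINE_ITERS = 8    # structure refinement iterations in the docking module
NUM_EPOCHS = 20         # Adam training epochs
DROPOUT = 0.1           # "10% dropout"


class SimpleMPN(nn.Module):
    """Generic message-passing network over a residue graph (hypothetical sketch)."""

    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(2 * HIDDEN_DIM, HIDDEN_DIM) for _ in range(NUM_MP_LAYERS)]
        )
        self.dropout = nn.Dropout(DROPOUT)

    def forward(self, h, adj):
        # h:   [num_nodes, HIDDEN_DIM] node (residue) states
        # adj: [num_nodes, num_nodes] adjacency matrix of the graph
        for layer in self.layers:
            msg = adj @ h  # sum neighbor states
            h = torch.relu(layer(torch.cat([h, msg], dim=-1)))
            h = self.dropout(h)
        return h


def refine_structure(coords, h, predict_update):
    """Iterative refinement skeleton: eight coordinate-update rounds,
    mirroring the reported "eight iterations of structure refinement".
    `predict_update` is a hypothetical stand-in for the learned update."""
    for _ in range(NUM_REFINE_ITERS):
        coords = coords + predict_update(h, coords)
    return coords


model = SimpleMPN()
optimizer = torch.optim.Adam(model.parameters())  # Adam, as reported
```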