Cross-Gate MLP with Protein Complex Invariant Embedding Is a One-Shot Antibody Designer

Authors: Cheng Tan, Zhangyang Gao, Lirong Wu, Jun Xia, Jiangbin Zheng, Xihong Yang, Yue Liu, Bozhen Hu, Stan Z. Li

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments are conducted to evaluate our results at both the sequence and structure levels, which demonstrate that our model achieves superior performance compared to the state-of-the-art antibody CDR design methods."
Researcher Affiliation | Academia | ¹Zhejiang University; ²AI Lab, Research Center for Industries of the Future, Westlake University; ³College of Computer, National University of Defense Technology
Pseudocode | No | The paper describes the model architecture and equations and provides schematic diagrams, but does not include a formal pseudocode or algorithm block.
Open Source Code | No | The paper contains no explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "We evaluate our model on three challenging antibody design tasks using the common experimental setups from previous works (Jin et al. 2022; Kong, Huang, and Liu 2023a; Fu and Sun 2022). These tasks include: (i) generative task on the Structural Antibody Database (Dunbar et al. 2014)"
Dataset Splits | Yes | "The dataset is split into training, validation, and testing sets according to the clustering of CDRs to maintain the generalization test. ... The clusters are split into training, validation, and testing sets with a ratio of 8:1:1. We report the results of 10-fold cross-validation in Table 1."
Hardware Specification | No | The paper does not specify the hardware used to run its experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not give version numbers for any software dependencies, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | "We used the default setup of each method, training the models for 20 epochs with Adam optimizer and a learning rate of 10⁻³. We used the checkpoint with the lowest validation loss for testing. ... The overall loss is L = L_seq + λL_struct, where λ = 0.8 is a weight hyperparameter that balances the sequence and structure loss."
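The cluster-level split quoted in the Dataset Splits row can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' code (which is not released): it assumes CDRs have already been grouped into clusters, and assigns whole clusters, rather than individual samples, to train/valid/test at an 8:1:1 ratio so that similar CDRs never leak across splits.

```python
import random

def split_clusters(cluster_ids, ratios=(8, 1, 1), seed=0):
    """Assign whole clusters to train/valid/test at the given ratio.

    Illustrative sketch only: how clusters are built (e.g. by CDR
    sequence identity) is outside the scope of this function.
    """
    ids = list(cluster_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    total = sum(ratios)
    n_train = len(ids) * ratios[0] // total
    n_valid = len(ids) * ratios[1] // total
    train = ids[:n_train]
    valid = ids[n_train:n_train + n_valid]
    test = ids[n_train + n_valid:]
    return train, valid, test
```

Splitting at the cluster level (rather than per sample) is what makes the reported results a generalization test: no CDR in the test set has a near-duplicate in training.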
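The training objective quoted in the Experiment Setup row can be written out directly. The sketch below is framework-agnostic and the names are illustrative placeholders, not the authors' implementation; only the formula L = L_seq + λ·L_struct and the value λ = 0.8 come from the paper.

```python
# Weight balancing the sequence and structure losses (value from the paper).
LAMBDA = 0.8

def overall_loss(seq_loss: float, struct_loss: float) -> float:
    """Compute the overall loss L = L_seq + lambda * L_struct."""
    return seq_loss + LAMBDA * struct_loss
```

In the reported setup, this scalar would be minimized with the Adam optimizer at a learning rate of 10⁻³ for 20 epochs, keeping the checkpoint with the lowest validation loss for testing.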