Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Gradient-guided discrete walk-jump sampling for biological sequence generation
Authors: Zarif Ikram, Dianbo Liu, M Saifur Rahman
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We showcase our method in two different modalities: discrete image and biological sequence involving antibody and peptide sequence generation tasks in the single objective and multi-objective setting. Through evaluation on these tasks, we show that our method generates high-quality samples that are well-optimized for specific tasks. |
| Researcher Affiliation | Academia | Zarif Ikram (CSE, BUET; National University of Singapore), Dianbo Liu (National University of Singapore), M Saifur Rahman (CSE, BUET) |
| Pseudocode | Yes | Algorithm 1: gradient-guided discrete walk-jump sampling (a hedged sketch of this loop follows the table) |
| Open Source Code | Yes | Training and evaluation code, training data, and model checkpoints are available at https://github.com/zarifikram/gg-dWJS/. |
| Open Datasets | Yes | To validate our method, we compare it against dWJS for the binarized static MNIST image generation task (Salakhutdinov & Murray, 2008; Larochelle & Bengio, 2008). |
| Dataset Splits | Yes | Then, we split the dataset into two parts: D1 and D2, following Angermueller et al. (2019), where D1 is visible to our algorithm and D2 is used to train the oracle for validation of the results. To split the dataset, we follow the same principle as Jain et al. (2022): for any peptide x ∈ D1, there are no peptides x′ ∈ D2 such that x′ belongs to x's group, and vice versa. This split yields 3219 AMPs and 4611 non-AMPs in D1. (A sketch of such a group-disjoint split follows the table.) |
| Hardware Specification | Yes | We conducted the training using four NVIDIA A100 GPUs. |
| Software Dependencies | Yes | To train the model from scratch, we utilize a batch size of 64 and the AdamW optimizer (Loshchilov & Hutter, 2019) in PyTorch (Paszke et al., 2019) with early stopping. The training parameters include a learning rate of 10⁻⁴ and a weight decay of 0.01. ... We use the kernel density estimation (KDE) implementation by scikit-learn (Pedregosa et al., 2011). (These dependencies are illustrated in the configuration sketch after the table.) |
| Experiment Setup | Yes | All our models are trained for a maximum of 40 epochs with early stopping, a learning rate of 10⁻⁴, a weight decay of 0.01, and a batch size of 64. ... Table 9: Hyperparameters used for dWJS and gg-dWJS sampling: σ = 1, δ = 0.5, γ = 1, K = 40, λ (% beta sheet) = 100, λ (instability index) = 1. |
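
For context on the quoted Algorithm 1 and the Table 9 hyperparameters, here is a minimal sketch of a gradient-guided walk-jump sampling loop. The callables `score_fn` (smoothed-score network), `denoise_fn` (denoiser), and `guidance_fn` (gradient of the task objective or discriminator) are hypothetical stand-ins, and the walk is simplified to overdamped Langevin steps; the γ in Table 9 suggests the paper's sampler uses a Langevin integrator with friction, which this sketch omits. It is not the authors' implementation.

```python
import torch

def gg_dwjs_sample(score_fn, denoise_fn, guidance_fn, y_init,
                   K=40, delta=0.5, lam=1.0):
    """Hedged sketch of gradient-guided discrete walk-jump sampling.

    score_fn, denoise_fn, and guidance_fn are hypothetical stand-ins for
    the paper's smoothed-score network, denoiser, and guidance gradient;
    defaults echo Table 9 (K = 40, delta = 0.5).
    """
    y = y_init.clone()
    for _ in range(K):
        # "Walk": Langevin step on the noise-smoothed density, with the
        # guidance gradient mixed in at weight lam.
        eps = torch.randn_like(y)
        y = y + delta * (score_fn(y) + lam * guidance_fn(y)) \
              + (2.0 * delta) ** 0.5 * eps
    # "Jump": single denoising step back to clean (discrete) sample space.
    return denoise_fn(y)
```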
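The group-disjoint split quoted under Dataset Splits (no peptide in D1 shares a group with any peptide in D2) can be illustrated as below; `group_of` and `visible_groups` are hypothetical helpers, not the authors' code.

```python
def group_disjoint_split(peptides, group_of, visible_groups):
    """Sketch of a group-disjoint split in the spirit of Jain et al. (2022).

    group_of maps a peptide to its group ID (hypothetical helper); every
    group lands entirely in D1 or entirely in D2, never in both.
    """
    d1 = [p for p in peptides if group_of(p) in visible_groups]
    d2 = [p for p in peptides if group_of(p) not in visible_groups]
    return d1, d2
```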
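The quoted optimizer and KDE details map onto standard PyTorch and scikit-learn calls. The sketch below only restates the quoted settings (AdamW, learning rate 10⁻⁴, weight decay 0.01); the placeholder network, the data, and the KDE kernel/bandwidth are assumptions, since the paper's report does not quote them here.

```python
import torch
from sklearn.neighbors import KernelDensity

# Placeholder standing in for the paper's denoising network (hypothetical).
model = torch.nn.Linear(32, 32)

# Quoted training setup: AdamW with lr 1e-4 and weight decay 0.01; batch
# size 64 and at most 40 epochs with early stopping belong in the loop.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

# Quoted evaluation detail: scikit-learn KDE. Kernel and bandwidth below
# are assumptions, not values reported in the paper.
samples = torch.randn(100, 2).numpy()
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(samples)
log_density = kde.score_samples(samples)
```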