RNA Secondary Structure Representation Network for RNA-proteins Binding Prediction
Authors: Ziyi Liu, Fulin Luo, Bo Du
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the proposed method outperforms several state-of-the-art methods on the benchmark datasets and achieves a larger improvement on small-size data. |
| Researcher Affiliation | Academia | 1 National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, School of Computer Science, and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China; 2 State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at https://github.com/ziniBRC/RNASSR-Net. |
| Open Datasets | Yes | We compare the performance of our method with other baselines on the benchmark RBP binding dataset in GraphProt, originally created by Maticzka et al. (2014). This dataset consists of 24 sets of HITS-CLIP-, PAR-CLIP-, and iCLIP-derived binding sites, where 23 sets were derived from doRiNA (Anders et al. 2011) and the PTB HITS-CLIP binding sites were taken from Xue et al. (2009). |
| Dataset Splits | Yes | To investigate the performance of our model, we randomly select 90% of the original training set from RBP-24 as the training set and use the remaining 10% as the validation set. (A minimal split sketch in code appears below the table.) |
| Hardware Specification | Yes | We run our experiments on an Ubuntu server with an NVIDIA GTX 2080Ti GPU with 12 GB of memory. |
| Software Dependencies | No | The paper mentions using 'PyTorch' and 'DGL' (an open-source framework for graph neural networks) but does not provide specific version numbers for these software components. (A version-recording snippet appears below the table.) |
| Experiment Setup | Yes | We train our model for a maximum of 100 epochs using Adam (Kingma and Ba 2015). For each epoch, we set the size of the batched training data to 128. For the learning-rate parameters, we set the initial learning rate to 0.001 and the learning-rate reduction factor to 0.5. [...] We set the weight decay and dropout to 0.01 and 0.25, respectively. (A training-loop sketch appears below the table.) |
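The Dataset Splits row quotes a 90/10 random split of each RBP-24 training set. Below is a minimal PyTorch sketch of such a split; the `TensorDataset` stand-in, its shapes, and the fixed seed are illustrative assumptions, only the 90/10 ratio comes from the paper.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in for one RBP-24 training set; the dummy tensors are
# placeholders, only the 90/10 ratio is taken from the paper.
full_train = TensorDataset(torch.randn(1000, 4),
                           torch.randint(0, 2, (1000,)))

n_train = int(0.9 * len(full_train))   # 90% for training
n_val = len(full_train) - n_train      # remaining 10% for validation

# A fixed generator makes the random split reproducible.
train_set, val_set = random_split(
    full_train, [n_train, n_val],
    generator=torch.Generator().manual_seed(0),
)
print(len(train_set), len(val_set))  # 900 100
```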
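Since the paper names PyTorch and DGL but pins no versions, a reproduction should at least record the versions it actually ran with. One way to do that, assuming both packages are installed:

```python
# Log the library and CUDA versions actually used, since the paper
# does not specify them; dgl must be installed for the import to work.
import torch
import dgl

print("torch:", torch.__version__)
print("dgl:", dgl.__version__)
print("cuda available:", torch.cuda.is_available())
```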
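The Experiment Setup row fixes most training hyperparameters. The sketch below wires them into a generic PyTorch training loop; the toy model, the dummy data, and the use of `ReduceLROnPlateau` as the scheduler behind the quoted reduce factor of 0.5 are assumptions, not the authors' actual code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy data and a toy classifier; only the hyperparameters marked
# below come from the paper, everything else is a placeholder.
train_set = TensorDataset(torch.randn(900, 4), torch.randint(0, 2, (900,)))
val_set = TensorDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                      nn.Dropout(p=0.25),          # dropout 0.25 (paper)
                      nn.Linear(64, 2))

# Paper: Adam, initial lr 0.001, weight decay 0.01.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.01)
# The paper gives only a reduce factor of 0.5; ReduceLROnPlateau on the
# validation loss is an assumption about which scheduler was used.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5)

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch 128 (paper)
val_loader = DataLoader(val_set, batch_size=128)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):  # at most 100 epochs (paper)
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)
    scheduler.step(val_loss)  # halves the lr when the val loss plateaus
```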