Attend and Predict: Understanding Gene Regulation by Selective Attention on Chromatin

Authors: Ritambhara Singh, Jack Lanchantin, Arshdeep Sekhon, Yanjun Qi

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the model across 56 different cell types (tasks) in humans. Not only is the proposed architecture more accurate, but its attention scores provide a better interpretation than state-of-the-art feature visualization methods such as saliency maps. AttentiveChrome provides more accurate predictions than state-of-the-art baselines. Using datasets from REMC, we evaluate AttentiveChrome on 56 different cell types (tasks).
Researcher Affiliation | Academia | Ritambhara Singh, Jack Lanchantin, Arshdeep Sekhon, Yanjun Qi; Department of Computer Science, University of Virginia; yanjun@virginia.edu
Pseudocode | No | The paper describes the architecture and its components in detail using text and diagrams, but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The footnote states "Code shared at www.deepchrome.org". DeepChrome is a baseline model mentioned in the paper, not the main AttentiveChrome methodology proposed here, so code for this paper's specific method (AttentiveChrome) is not provided.
Open Datasets | Yes | Following DeepChrome [29], we downloaded gene expression levels and signal data of five core HM marks for 56 different cell types archived by the REMC database [18].
Dataset Splits | Yes | For each cell type, we divided our set of 19,802 gene samples into three separate but equal-size folds for training (6601 genes), validation (6601 genes), and testing (6600 genes), respectively. (A fold-split sketch follows below the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models or CPU types).
Software Dependencies | No | The paper describes the deep learning models and components used (e.g., LSTM, CNN) but does not list specific software dependencies with version numbers (e.g., Python, TensorFlow, or PyTorch versions).
Experiment Setup | Yes | Model Hyperparameters: For AttentiveChrome variations, we set the bin-level LSTM embedding size d to 32 and the HM-level LSTM embedding size to 16. Since we implement a bi-directional LSTM, this results in each embedding vector h_t having size 64 and each embedding vector m_j having size 32. We therefore set the context vectors W_b and W_s to sizes 64 and 32, respectively. (A model-configuration sketch follows below the table.)
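
To make the fold sizes reported under Dataset Splits concrete, the following Python sketch partitions 19,802 gene samples into equal-size training, validation, and test folds (6601/6601/6600). The shuffling step, the fixed random seed, and the variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical illustration of the per-cell-type split described in the paper:
# 19,802 genes -> 6601 train, 6601 validation, 6600 test.
n_genes = 19802
rng = np.random.default_rng(0)        # fixed seed is an assumption
gene_ids = rng.permutation(n_genes)   # shuffled gene indices (shuffling is an assumption)

train_ids = gene_ids[:6601]
valid_ids = gene_ids[6601:13202]
test_ids = gene_ids[13202:]

assert len(train_ids) == 6601 and len(valid_ids) == 6601 and len(test_ids) == 6600
```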
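The hyperparameters quoted under Experiment Setup can also be read as a concrete model configuration. Below is a minimal PyTorch sketch of a two-level bi-directional LSTM encoder with soft attention, matching the stated sizes (bin-level hidden size 32 giving 64-dim h_t, HM-level hidden size 16 giving 32-dim m_j, context vectors of sizes 64 and 32). This is a simplified reading of the described architecture, not the authors' released code; the module names, the exact attention formulation, and the input shape (5 HM marks, 100 bins per gene) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveEncoder(nn.Module):
    """Sketch of a two-level attention encoder with the paper's stated sizes.

    Bin-level bi-LSTM: hidden size d = 32  -> h_t of size 64 per bin.
    HM-level  bi-LSTM: hidden size d' = 16 -> m_j of size 32 per HM mark.
    Context vectors W_b (size 64) and W_s (size 32) score bins and HM marks.
    """

    def __init__(self, d=32, d_prime=16):
        super().__init__()
        self.bin_lstm = nn.LSTM(input_size=1, hidden_size=d,
                                bidirectional=True, batch_first=True)
        self.hm_lstm = nn.LSTM(input_size=2 * d, hidden_size=d_prime,
                               bidirectional=True, batch_first=True)
        self.W_b = nn.Parameter(torch.randn(2 * d))        # bin-level context vector (size 64)
        self.W_s = nn.Parameter(torch.randn(2 * d_prime))  # HM-level context vector (size 32)
        self.classifier = nn.Linear(2 * d_prime, 2)        # gene expression high / low

    def forward(self, x):
        # x: (batch, n_hms, n_bins) read counts of each histone mark across bins
        batch, n_hms, n_bins = x.shape
        bins = x.reshape(batch * n_hms, n_bins, 1)
        h, _ = self.bin_lstm(bins)                       # (batch*n_hms, n_bins, 64)
        alpha = F.softmax(h @ self.W_b, dim=1)           # bin-level attention weights
        hm_repr = (alpha.unsqueeze(-1) * h).sum(dim=1)   # weighted sum over bins
        hm_repr = hm_repr.reshape(batch, n_hms, -1)
        m, _ = self.hm_lstm(hm_repr)                     # (batch, n_hms, 32)
        beta = F.softmax(m @ self.W_s, dim=1)            # HM-level attention weights
        gene_repr = (beta.unsqueeze(-1) * m).sum(dim=1)  # weighted sum over HM marks
        return self.classifier(gene_repr)

# Example: 5 core HM marks, 100 bins per gene (bin count assumed), batch of 8 genes.
model = AttentiveEncoder()
logits = model(torch.randn(8, 5, 100))
print(logits.shape)  # torch.Size([8, 2])
```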