BayReL: Bayesian Relational Learning for Multi-omics Data Integration

Authors: Ehsan Hajiramezanali, Arman Hasanzadeh, Nick Duffield, Krishna Narayanan, Xiaoning Qian

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on several real-world datasets demonstrate enhanced performance of BayReL in inferring meaningful interactions compared to existing baselines. We test the performance of BayReL on capturing meaningful inter-relations across views on three real-world datasets.
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Texas A&M University. {ehsanr, armanihm, duffieldng, krn, xqian}@tamu.edu
Pseudocode | No | The paper includes a graphical model (Figure 1) but does not provide any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/ehsanhajiramezanali/BayReL
Open Datasets | Yes | We test the performance of BayReL on capturing meaningful inter-relations across views on three real-world datasets. The dataset includes 16S ribosomal RNA (rRNA) sequencing and metabolomics for 172 patients with CF. For miRNA-miRNA interaction networks, we construct a weighted network based on the functional similarity between pairs of miRNAs using MISIM v2.0 [Li et al., 2019]. The TCGA data contains both miRNA and gene expression data for 1156 breast cancer (BRCA) tumor patients. This in vitro drug sensitivity study has both gene expression and drug sensitivity data to a panel of 160 chemotherapy drugs and targeted inhibitors across 30 AML patients [Lee et al., 2018].
Dataset Splits | Yes | Table 2 shows the prediction sensitivity of both models while using different percentages of samples to train the models. Using 50% of all the samples, while the average prediction sensitivity of BayReL decreases by less than 2% in the worst-case scenario (i.e., average node density 0.20), BCCA's performance degrades by around 6%. ... The KL divergence values between the two inferred bipartite graphs for BayReL are 0.35 and 0.32 when using 25% and 50% of samples, respectively.
Hardware Specification | No | The paper does not specify any hardware used for running the experiments, such as specific GPU or CPU models.
Software Dependencies | No | We implement our model in TensorFlow [Abadi et al., 2015]. However, no specific version number for TensorFlow or any other software dependency is provided.
Experiment Setup | Yes | For all datasets, we used the same architecture for BayReL: two-layer GCNs with a shared 16-dimensional first layer and separate 8-dimensional output layers as ϕ^v_{emb,µ} and ϕ^v_{emb,σ}. We use the same embedding function for all views. An inner-product decoder is used for ϕ^v_{sim}. Also, we employ a one-layer 8-dimensional GCN as ϕ_{prior} to learn the mean of the prior. We set the variance of the prior to one. We deploy view-specific two-layer fully connected neural networks (FCNNs) with 16- and 8-dimensional layers, followed by a two-layer GCN (16- and 8-dimensional layers) shared across views, as ϕ^v_{post,µ} and ϕ^v_{post,σ}. Finally, we use a view-specific three-layer FCNN (8-, input_dim-, and input_dim-dimensional layers) as ϕ^v_{dec}. ReLU activation functions are used. The model is trained with the Adam optimizer. Also, in our experiments, we multiply the term log p_θ(Z_v | G, A, U) in the objective function by a scalar α = 30 during training in order to infer more accurate inter-relations. For a fair comparison, we choose the same latent dimension for BCCA as for BayReL, i.e., 8.
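To make the quoted setup concrete, below is a minimal TensorFlow 2.x sketch of the per-view embedding stack and the inner-product decoder it describes (shared 16-dimensional first GCN layer, separate 8-dimensional mean/variance heads). This is not the authors' code: the GCNLayer helper and all class/function names here are hypothetical, and the actual implementation lives in the linked repository.

```python
# Minimal sketch of the described encoder/decoder stack (TensorFlow 2.x).
# All names here (GCNLayer, NodeEmbedding, ...) are hypothetical; see
# https://github.com/ehsanhajiramezanali/BayReL for the authors' implementation.
import tensorflow as tf


class GCNLayer(tf.keras.layers.Layer):
    """One graph-convolution layer: H' = act(A_norm @ H @ W)."""

    def __init__(self, units, activation=None):
        super().__init__()
        self.linear = tf.keras.layers.Dense(units, use_bias=False)
        self.activation = tf.keras.activations.get(activation)

    def call(self, features, adj_norm):
        # adj_norm is the pre-normalized intra-view adjacency matrix.
        return self.activation(tf.matmul(adj_norm, self.linear(features)))


class NodeEmbedding(tf.keras.Model):
    """phi^v_emb: shared 16-dim first GCN layer, separate 8-dim heads."""

    def __init__(self):
        super().__init__()
        self.shared = GCNLayer(16, activation="relu")
        self.mu_head = GCNLayer(8)      # phi^v_{emb,mu}
        self.sigma_head = GCNLayer(8)   # phi^v_{emb,sigma}

    def call(self, features, adj_norm):
        h = self.shared(features, adj_norm)
        return self.mu_head(h, adj_norm), self.sigma_head(h, adj_norm)


def inner_product_decoder(z_a, z_b):
    """phi^v_sim: inter-view interaction probabilities from embeddings."""
    return tf.sigmoid(tf.matmul(z_a, z_b, transpose_b=True))


# Training uses Adam; per the quoted setup, the log p_theta(Z_v | G, A, U)
# term in the objective is scaled by alpha = 30 during training.
ALPHA = 30.0
optimizer = tf.keras.optimizers.Adam()
```

A full model would additionally include the one-layer GCN prior, the view-specific FCNN plus shared GCN posterior networks, and the three-layer FCNN decoders described in the quote; the sketch covers only the embedding and similarity pieces.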