Unsupervised Deep Embedded Fusion Representation of Single-Cell Transcriptomics
Authors: Yue Cheng, Yanchi Su, Zhuohan Yu, Yanchun Liang, Ka-Chun Wong, Xiangtao Li
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted comprehensive experiments on 15 single-cell RNA-seq datasets from different sequencing platforms and demonstrated the superiority of scDEFR over a variety of state-of-the-art methods. |
| Researcher Affiliation | Academia | (1) School of Artificial Intelligence, Jilin University, Jilin, China; (2) Zhuhai Laboratory of Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Zhuhai College of Science and Technology, Zhuhai 519041, China; (3) Department of Computer Science, City University of Hong Kong, Hong Kong SAR |
| Pseudocode | No | The paper describes the methods using text and mathematical equations, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a direct link to a source-code repository, nor does it contain an unambiguous statement that the code for the described methodology is released or available. |
| Open Datasets | Yes | To demonstrate the effectiveness of scDEFR, we applied our method to fifteen real scRNA-seq datasets collected from (Yu et al. 2022). These fifteen real datasets were generated from seven different representative sequencing platforms and originate from several species. The detailed information is described in Table 1. |
| Dataset Splits | No | The paper describes data preprocessing and the use of datasets but does not provide explicit details about train, validation, and test splits (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | Yes | Finally, our experiments are conducted on an Ubuntu server with an NVIDIA Quadro RTX 6000 GPU and 24GB of memory. |
| Software Dependencies | No | The paper mentions using the "Adam algorithm" as an optimizer and the "scanpy package" but does not provide specific version numbers for these or any other software components. |
| Experiment Setup | Yes | In our study, we constructed the cell graph with the KNN algorithm with the nearest neighbor parameter at K = 10. In addition, we constructed the network using the combined fusion cell topology encoder and transcriptomics profile-based graph encoder, and the linear fusion parameter α was set to 0.1; each layer was configured with 1024, 128, and 24 nodes; and the layer of the fully connected decoder was configured with a symmetric encoder form. In particular, our algorithm consisted of pre-training and training, both of which were set to 250 epochs. The Adam algorithm was used as an optimizer, with a learning rate of 5e-5 for pretraining and 1e-7 for formal training. The weight coefficients for objective functions {γ1, γ2, γ3, γ4} are respectively set to {0.3, 0.1, 0.5, 0.1}. |
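The setup row above names two concrete, reproducible steps: a KNN cell graph with K = 10 and a linear fusion of the two encoder outputs with α = 0.1 into a 24-dimensional embedding. The following is a minimal numpy sketch of those two steps only, not the authors' code: the toy data, the Euclidean distance metric, and the fusion form z = α·z_topo + (1−α)·z_feat are assumptions, since the paper's quoted text does not specify them.

```python
import numpy as np

K = 10       # nearest-neighbour parameter from the paper
ALPHA = 0.1  # linear fusion parameter alpha from the paper

rng = np.random.default_rng(0)
X = rng.random((50, 200))  # toy expression matrix: 50 cells x 200 genes (assumed)

# Build the KNN cell graph: connect each cell to its K nearest neighbours.
# Euclidean distance is an assumption; the paper does not state the metric.
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)               # exclude self-loops
neighbours = np.argsort(d, axis=1)[:, :K]  # indices of the K closest cells
A = np.zeros((50, 50))
rows = np.repeat(np.arange(50), K)
A[rows, neighbours.ravel()] = 1.0          # binary adjacency, K edges per row

# Linear fusion of the two encoder outputs in the 24-node bottleneck layer.
# z = alpha * z_topo + (1 - alpha) * z_feat is an assumed reading of the
# "linear fusion parameter alpha" in the setup description.
z_topo = rng.random((50, 24))  # placeholder: fusion cell topology encoder output
z_feat = rng.random((50, 24))  # placeholder: transcriptomics profile encoder output
z = ALPHA * z_topo + (1 - ALPHA) * z_feat
```

With α = 0.1, the fused embedding leans heavily on the transcriptomics-profile branch, which matches the small weight the quoted setup assigns to the topology side.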