Cross-Linked Unified Embedding for cross-modality representation learning
Authors: Xinming Tu, Zhi-Jie Cao, Chen-Rui Xia, Sara Mostafavi, Ge Gao
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We benchmark CLUE on multi-modal data from single-cell measurements, illustrating CLUE's superior performance in all assessed categories of the NeurIPS 2021 Multimodal Single-cell Data Integration Competition. |
| Researcher Affiliation | Academia | Xinming Tu¹,³, Zhi-Jie Cao¹,², Chen-Rui Xia¹,², Sara Mostafavi³, Ge Gao¹,² (¹Peking University, ²Changping Laboratory, ³University of Washington) |
| Pseudocode | No | The paper describes the model architecture and equations but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | CLUE for single-cell multi-modality integration is incorporated in the scglue package. All source code is available at https://github.com/gao-lab/GLUE. |
| Open Datasets | Yes | NeurIPS 2021 Multi-modality Competition Datasets[34]. The NeurIPS multi-modal competition used data from two types of recent technologies for measuring single-cell multi-modal data: 10X Genomics Multiome and CITE-seq[35]. |
| Dataset Splits | No | The Multiome training dataset includes 42,492 cells, and the test dataset includes 20,009 cells. The CITE-seq training dataset includes 66,175 cells, and the test dataset includes 15,066 cells. We leave one batch (53) out for testing, and use the other three batches (54, 55, 56) as training data. The paper provides training and test splits, but no explicit validation split information. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, or cloud provider instances used for running experiments. |
| Software Dependencies | No | The paper mentions software like 'scglue package' and 'UMAP', but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | No | The paper mentions that 'hyperparameter search' was performed, but it does not provide specific hyperparameter values or detailed training configurations (e.g., learning rates, batch sizes, epochs) in the main text. |
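The batch-based train/test protocol noted in the Dataset Splits row (hold out batch 53, train on batches 54, 55, 56) can be sketched in a few lines. This is an illustrative sketch only: the cell records and batch labels below are placeholders, standing in for the competition's actual per-cell batch annotations (e.g. stored in AnnData `.obs` metadata).

```python
# Illustrative leave-one-batch-out split: hold out batch "53" for testing,
# train on batches "54", "55", "56". Cell ids and batch labels here are
# hypothetical stand-ins for the real competition metadata.
cells = [
    {"id": "cell_a", "batch": "53"},
    {"id": "cell_b", "batch": "54"},
    {"id": "cell_c", "batch": "55"},
    {"id": "cell_d", "batch": "56"},
]

TEST_BATCHES = {"53"}  # the single held-out batch

# Partition cells by batch membership rather than by random sampling,
# so the test set comes entirely from an unseen batch.
train = [c for c in cells if c["batch"] not in TEST_BATCHES]
test = [c for c in cells if c["batch"] in TEST_BATCHES]

print(len(train), len(test))  # → 3 1
```

Splitting by batch rather than at random is what makes this an out-of-batch generalization test, which is why the review notes a separate validation split would still be needed for tuning.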