Unbalanced CO-optimal Transport
Authors: Quang Huy Tran, Hicham Janati, Nicolas Courty, Rémi Flamary, Ievgen Redko, Pinar Demetci, Ritambhara Singh
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | With this result in hand, we provide empirical evidence of this robustness for the challenging tasks of heterogeneous domain adaptation with and without varying proportions of classes and simultaneous alignment of samples and features across single-cell measurements. Our theoretical findings are showcased in unsupervised heterogeneous domain adaptation and single-cell multi-omic data alignment, demonstrating a very competitive performance. |
| Researcher Affiliation | Academia | ¹Université Bretagne Sud, IRISA; ²CMAP, École Polytechnique, IP Paris; ³LTCI, Télécom Paris, IP Paris; ⁴Univ. Lyon, UJM-Saint-Étienne, CNRS, UMR 5516; ⁵Center for Computational Molecular Biology, Brown University; ⁶Department of Computer Science, Brown University |
| Pseudocode | Yes | Algorithm 1: BCD algorithm to solve UCOOT |
| Open Source Code | No | The paper does not provide a specific link or explicit statement about releasing the source code for the proposed UCOOT methodology. |
| Open Datasets | Yes | MNIST dataset; "We consider the Caltech-Office dataset (Saenko et al. 2010)"; "For demonstration, we choose a dataset generated by the CITE-seq experiment (Stoeckius et al. 2017)" |
| Dataset Splits | No | The paper mentions hyperparameter validation: "The hyper-parameters for each method are validated on a unique pair of datasets (W → W), then fixed for all other pairs in order to provide truly unsupervised HDA generalization." However, it does not explicitly provide training/validation/test splits (percentages, counts, or standard named splits) for general model training or evaluation in the main text. |
| Hardware Specification | No | The paper mentions running computations on 'GPUs' but does not provide specific hardware details such as GPU models (e.g., NVIDIA A100), CPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions using specific algorithms like 'Sinkhorn’s algorithm' and 'non-negative penalized regression (NNPR)' and refers to pre-trained models like 'Google Net' and 'Caffe Net', but it does not specify any software libraries or dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8). |
| Experiment Setup | Yes | More precisely, given a hyperparameter $\varepsilon \geq 0$, discrete UCOOT can be written as: $\dots + \lambda_1\,\mathrm{KL}\big(\pi^s_{\#1} \otimes \pi^f_{\#1} \,\big\vert\, u_1\big) + \lambda_2\,\mathrm{KL}\big(\pi^s_{\#2} \otimes \pi^f_{\#2} \,\big\vert\, u_2\big) + \varepsilon\,\mathrm{KL}\big(\pi^s \otimes \pi^f \,\big\vert\, \mu^s_1 \otimes \mu^s_2 \otimes \mu^f_1 \otimes \mu^f_2\big)$. The hyper-parameters for each method are validated on a unique pair of datasets (W → W), then fixed for all other pairs in order to provide truly unsupervised HDA generalization. The results are presented after hyperparameter tuning of both methods with a similar grid size per hyperparameter (see Experimental Details in Appendix). |
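The quoted formulation combines a CO-optimal transport fitting term with relaxed (KL-penalized) marginal constraints on both the sample coupling $\pi^s$ and the feature coupling $\pi^f$. Since the report notes no released code, here is a minimal sketch of how the discrete objective could be evaluated, assuming squared loss and illustrative names (`pi_s`, `pi_f`, `lam1`, `lam2`, `eps`) that are not from the paper:

```python
# Illustrative sketch of the discrete UCOOT objective, NOT the authors' code.
import numpy as np

def kl(p, q):
    """Generalized KL divergence KL(p | q) = <p, log(p/q)> - m(p) + m(q)."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) - p.sum() + q.sum()

def ucoot_objective(X, Y, pi_s, pi_f, mu_s1, mu_s2, mu_f1, mu_f2,
                    lam1=1.0, lam2=1.0, eps=0.1):
    # Squared-loss CO-OT term sum_{ijkl} (X[i,k]-Y[j,l])^2 pi_s[i,j] pi_f[k,l],
    # expanded so the 4-way cost tensor is never materialized.
    rs1, rs2 = pi_s.sum(1), pi_s.sum(0)   # sample-coupling marginals
    rf1, rf2 = pi_f.sum(1), pi_f.sum(0)   # feature-coupling marginals
    fit = (rs1 @ (X**2) @ rf1 + rs2 @ (Y**2) @ rf2
           - 2.0 * np.sum(pi_s * (X @ pi_f @ Y.T)))
    # Marginal relaxation terms KL(pi_s_#k ⊗ pi_f_#k | mu_sk ⊗ mu_fk).
    pen1 = kl(np.outer(rs1, rf1), np.outer(mu_s1, mu_f1))
    pen2 = kl(np.outer(rs2, rf2), np.outer(mu_s2, mu_f2))
    # Joint regularization KL(pi_s ⊗ pi_f | mu_s1 ⊗ mu_s2 ⊗ mu_f1 ⊗ mu_f2).
    reg = kl(np.einsum('ij,kl->ijkl', pi_s, pi_f),
             np.einsum('i,j,k,l->ijkl', mu_s1, mu_s2, mu_f1, mu_f2))
    return fit + lam1 * pen1 + lam2 * pen2 + eps * reg
```

Setting `lam1`/`lam2` large recovers (approximately) balanced COOT, while finite values allow mass variation across the marginals, which is the robustness property the paper exercises in the HDA and single-cell experiments.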