A Computational Framework for Solving Wasserstein Lagrangian Flows
Authors: Kirill Neklyudov, Rob Brekelmans, Alexander Tong, Lazar Atanackovic, Qiang Liu, Alireza Makhzani
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We showcase the versatility of the proposed framework by outperforming previous approaches for the single-cell trajectory inference, where incorporating prior knowledge into the dynamics is crucial for correct predictions." and "In Table 1, we report results on EB, Cite, and Multi datasets." |
| Researcher Affiliation | Academia | ¹Université de Montréal, ²Mila - Quebec AI Institute, ³Vector Institute, ⁴University of Toronto, ⁵University of Texas at Austin. |
| Pseudocode | Yes | Algorithm 1 Learning Wasserstein Lagrangian Flows |
| Open Source Code | Yes | The code reproducing the experiments is available at https://github.com/necludov/wl-mechanics |
| Open Datasets | Yes | "The EB dataset (Moon et al., 2019) and the CITE-seq (Cite) and Multiome (Multi) datasets (Burkhardt et al., 2022) are repurposed and preprocessed by Tong et al. (2023b;a) for the task of trajectory inference." and "Further details regarding the raw dataset can be found at the competition website: https://www.kaggle.com/competitions/open-problems-multimodal/data" |
| Dataset Splits | Yes | "For all experiments, we train k independent models over k partitions of the single-cell datasets. The training data partition is determined by a left-out intermediary timepoint. We then average test performance over the k independent model predictions computed on the respective left-out marginals. For experiments using the EB dataset, we train 3 independent models using marginals from timepoint partitions [1, 3, 4, 5], [1, 2, 4, 5], [1, 2, 3, 5] and evaluate each model using the respective left-out marginals at timepoints [2], [3], [4]. Likewise, for experiments using Cite and Multi datasets, we train 2 independent models using marginals from timepoint partitions [2, 4, 7], [2, 3, 7] and evaluate each model using the respective left-out marginals at timepoints [3], [4]." (A code sketch of this split protocol follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using Multi-Layer Perceptron (MLP) architectures and cites the POT ('Python Optimal Transport') library, but it does not specify software versions for any libraries, frameworks, or programming languages used in the experiments. (A sketch of a typical POT-based evaluation follows the table.) |
| Experiment Setup | No | The paper states, 'For detailed description of the architectures and hyperparameters we refer the reader to the code supplemented,' indicating that specific experimental setup details such as concrete hyperparameter values or training configurations are not provided in the main text. |
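The leave-one-timepoint-out protocol quoted under Dataset Splits is easy to restate in code. The sketch below is illustrative only: the names `leave_one_out_partitions`, `timepoints`, and `held_out` are hypothetical and do not come from the paper's repository; only the timepoint partitions themselves are taken from the quote above.

```python
# Hypothetical sketch of the leave-one-timepoint-out splits described above.
# Function and variable names are illustrative, not from the paper's code.

def leave_one_out_partitions(timepoints, held_out):
    """Return (train, test) timepoint lists for each held-out intermediary timepoint."""
    splits = []
    for t in held_out:
        train = [tp for tp in timepoints if tp != t]
        splits.append((train, [t]))
    return splits

# EB dataset: marginals at timepoints 1..5, holding out each intermediary timepoint.
eb_splits = leave_one_out_partitions([1, 2, 3, 4, 5], held_out=[2, 3, 4])
# -> [([1, 3, 4, 5], [2]), ([1, 2, 4, 5], [3]), ([1, 2, 3, 5], [4])]

# Cite/Multi datasets: marginals at timepoints 2, 3, 4, 7, holding out 3 and 4.
cite_splits = leave_one_out_partitions([2, 3, 4, 7], held_out=[3, 4])
# -> [([2, 4, 7], [3]), ([2, 3, 7], [4])]
```

Test performance is then averaged over the independent models, each scored on its own left-out marginal.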
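Since the Software Dependencies row names the POT library without pinning versions, the following is a minimal, hedged sketch of the kind of evaluation POT supports: comparing a predicted marginal against a left-out marginal via the earth mover's distance. The exact metric code used in the paper lives in the linked repository; `wasserstein_1` and the random point clouds below are purely illustrative assumptions.

```python
# Minimal sketch of scoring a predicted marginal against a left-out marginal
# with the POT library cited above; illustrative only, not the paper's code.
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def wasserstein_1(pred, target):
    """Earth mover's distance between two empirical point clouds of shape (n, d), (m, d)."""
    a = np.full(len(pred), 1.0 / len(pred))      # uniform weights on predicted samples
    b = np.full(len(target), 1.0 / len(target))  # uniform weights on target samples
    M = ot.dist(pred, target, metric="euclidean")  # pairwise Euclidean cost matrix
    return ot.emd2(a, b, M)                      # exact OT cost (W1 under this cost)

pred = np.random.randn(200, 5)    # e.g., model samples at the held-out timepoint
target = np.random.randn(300, 5)  # e.g., observed cells at that timepoint
print(wasserstein_1(pred, target))
```

Uniform weights keep this a standard EMD between empirical distributions even when the two samples differ in size.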