Dynamic Conditional Optimal Transport through Simulation-Free Flows

Authors: Gavin Kerrigan, Giosue Migliorini, Padhraic Smyth

NeurIPS 2024

Reproducibility assessment. Each item below lists the reproducibility variable, the assessed result, and the LLM's supporting response (typically quoted from the paper).
Research Type: Experimental. "Empirically, we demonstrate that our method is competitive on several challenging conditional generation tasks, including an infinite-dimensional inverse problem."
Researcher Affiliation: Academia. All three authors are affiliated with the University of California, Irvine: Gavin Kerrigan (Department of Computer Science, gavin.k@uci.edu), Giosue Migliorini (Department of Statistics, gmiglior@uci.edu), and Padhraic Smyth (Department of Computer Science, smyth@ics.uci.edu).
Pseudocode: No. The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code: Yes. "Code for all of our experiments is available at https://github.com/GavinKerrigan/cot_fm".
Open Datasets: Yes. "The datasets moons, circles, swissroll are available through scikit-learn [Pedregosa et al., 2011]. ... The training and test datasets are generated following the same procedure as Hosseini et al. [2023]."
Dataset Splits: Yes. "For all datasets, we generate a training set (i.e., samples from the target distribution) of 20,000 samples and 1,000 held-out validation samples for model selection. ... We generate a training set of 10,000 (y, u) pairs using the procedure described above and a held-out validation set of 10,000 (y, u) pairs for model selection."
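For reference, a minimal sketch of generating these 2D datasets and a held-out split with scikit-learn. The noise levels, random seeds, and the choice of which two swiss-roll coordinates to keep are illustrative assumptions; the quoted text does not specify them.

```python
# Minimal sketch, assuming standard scikit-learn generators and
# hypothetical noise/seed settings (not specified in the paper's quotes).
from sklearn.datasets import make_moons, make_circles, make_swiss_roll

n_train, n_val = 20_000, 1_000

# Target samples for the 2D datasets (class labels are discarded).
moons_train, _ = make_moons(n_samples=n_train, noise=0.05, random_state=0)
moons_val, _ = make_moons(n_samples=n_val, noise=0.05, random_state=1)

circles_train, _ = make_circles(n_samples=n_train, noise=0.05, factor=0.5, random_state=0)

# make_swiss_roll returns 3D points; a 2D variant typically keeps two axes.
roll_3d, _ = make_swiss_roll(n_samples=n_train, noise=0.05, random_state=0)
swissroll_train = roll_3d[:, [0, 2]]  # assumption: drop the "depth" axis
```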
Hardware Specification: No. The paper states: "All models can be trained on a single GPU with less than 24 GB of memory, and our experiments were parallelized over 8 such GPUs on a local server." This description lacks specific GPU or CPU model numbers and any further details about the 'local server'.
Software Dependencies: No. The paper cites software packages such as scikit-learn [Pedregosa et al., 2011], the POT Python package [Flamary et al., 2021], the PyMC Python package [Abril-Pla et al., 2023], FEniCS [Alnæs et al., 2015], the torchcfm package [Tong et al., 2023], and the neuraloperator package [Kovachki et al., 2021], but it does not provide version numbers for any of these dependencies.
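Since no versions are pinned, a reproducer would need to record the versions installed in their own environment. A minimal sketch, assuming the usual PyPI distribution names for the cited projects (e.g., POT distributes as "POT" but imports as "ot"; the FEniCS name in particular varies by install method):

```python
# Minimal sketch: record installed versions of the packages the paper cites.
# Distribution names below are assumptions based on common PyPI naming.
from importlib.metadata import version, PackageNotFoundError

for pkg in ["scikit-learn", "POT", "pymc", "fenics", "torchcfm", "neuraloperator"]:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```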
Experiment Setup: Yes. "Table 4: Hyperparameter grid used for random search of the FM and COT-FM models on the 2D and Lotka-Volterra datasets. ... For all of the models in consideration, we fix the architecture to have hidden_channels = 64, projection_channels = 256, and 32 Fourier modes. We train each model for 1500 epochs, and hyperparameters for each architecture are selected as follows: ... We use the Adam optimizer where we only tune the learning rate, leaving all other settings as their defaults in pytorch."
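A minimal sketch of what this setup could look like with the neuraloperator package and PyTorch. The FNO constructor arguments have changed across neuraloperator releases (the paper pins no version), so the argument names below reflect a recent release and may need adjusting; the input/output channel counts and the learning rate are hypothetical placeholders.

```python
# Minimal sketch, assuming a recent neuraloperator release.
import torch
from neuralop.models import FNO

model = FNO(
    n_modes=(32,),            # 32 Fourier modes, assuming a 1D problem here
    hidden_channels=64,       # fixed in the paper's setup
    projection_channels=256,  # fixed in the paper's setup
    in_channels=2,            # hypothetical: depends on the conditioning setup
    out_channels=1,           # hypothetical: depends on the target field
)

# Per the quote: only the learning rate is tuned; all other Adam
# settings are left at their PyTorch defaults.
learning_rate = 1e-3  # hypothetical; the paper selects this via random search
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
```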