Hierarchical Optimal Transport for Multimodal Distribution Alignment

Authors: John Lee, Max Dabagia, Eva Dyer, Christopher Rozell

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply this method to synthetic datasets that model data as mixtures of low-rank Gaussians and study the impact that different geometric properties of the data have on alignment. Next, we apply our approach to a neural decoding application where the goal is to predict movement directions and instantaneous velocities from populations of neurons in the macaque primary motor cortex. Our results demonstrate that when clustered structure exists in datasets, and is consistent across trials or time points, a hierarchical alignment strategy that leverages such structure can provide significant improvements in cross-domain alignment. |
| Researcher Affiliation | Academia | School of Electrical and Computer Engineering and Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA. {john.lee, maxdabagia, evadyer, crozell}@gatech.edu |
| Pseudocode | Yes | Algorithm 1: Hierarchical Wasserstein Alignment (HiWA). A minimal sketch of the core alternation appears below the table. |
| Open Source Code | Yes | MATLAB code can be found at https://github.com/siplab-gt/hiwa-matlab. Neural datasets and Python code are provided at http://nerdslab.github.io/neuralign. |
| Open Datasets | Yes | Neural datasets and Python code are provided at http://nerdslab.github.io/neuralign. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits, nor does it refer to predefined splits with citations or a detailed splitting methodology. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models, or cloud resources with specifications. |
| Software Dependencies | No | The paper mentions 'MATLAB code' and 'Python code' but does not provide specific version numbers for these or any other key software components, libraries, or solvers. |
| Experiment Setup | No | The paper describes general parameters of the HiWA algorithm (entropic parameters γ1, γ2 > 0, ADMM parameter µ > 0) and dataset characteristics, but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations used in its experiments. |
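
For orientation, the sketch below illustrates the kind of alternation at the heart of such an alignment scheme: entropic optimal transport (Sinkhorn, whose regularizer plays the role the paper assigns to the entropic parameters γ1, γ2) interleaved with an orthogonal Procrustes rotation update. This is a minimal, flat (non-hierarchical) NumPy sketch under those assumptions, not the authors' released MATLAB or Python implementation; the function names and the defaults `gamma`, `n_iters`, and `n_outer` are illustrative choices.

```python
import numpy as np

def sinkhorn(C, gamma=0.1, n_iters=200):
    """Entropic-OT transport plan for cost matrix C via Sinkhorn iterations.

    gamma is the entropic regularizer (in the role of the paper's
    gamma_1 / gamma_2); uniform marginals are an assumption."""
    n, m = C.shape
    K = np.exp(-C / gamma)                      # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m       # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]          # transport plan T

def rotation_update(X, Y, T):
    """Rotation in SO(d) best aligning the rows of X @ R to the rows of Y,
    weighted by the soft correspondences in T (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(X.T @ T @ Y)
    S = np.eye(X.shape[1])
    S[-1, -1] = np.sign(np.linalg.det(U @ Vt))  # fix reflection: det(R) = +1
    return U @ S @ Vt

def align(X, Y, gamma=0.1, n_outer=50):
    """Alternate Sinkhorn correspondences and Procrustes rotation updates."""
    R = np.eye(X.shape[1])
    for _ in range(n_outer):
        XR = X @ R
        C = ((XR[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        T = sinkhorn(C, gamma)                  # soft correspondence
        R = rotation_update(X, Y, T)            # rotation given correspondence
    return R, T
```

Under this sketch, a call like `R, T = align(X, Y)` with X of shape (n, d) and Y of shape (m, d) returns an estimated rotation and a soft matching. The paper's HiWA additionally exploits cluster structure, solving transport both between clusters and within matched cluster pairs inside an ADMM loop governed by the parameter µ, which this flat sketch omits.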