Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters

Authors: Pavel Dvurechenskii, Darina Dvinskikh, Alexander Gasnikov, Cesar Uribe, Angelia Nedich

NeurIPS 2018

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Finally, we demonstrate the effectiveness of our algorithm on the distributed computation of the regularized Wasserstein barycenter of a set of von Mises distributions for various network topologies and network sizes. Moreover, we show some initial results on the problem of image aggregation for two datasets, namely, a subset of the MNIST digit dataset [29] and a subset of the IXI Magnetic Resonance dataset [1].
Researcher Affiliation | Collaboration | Pavel Dvurechensky, Darina Dvinskikh (Weierstrass Institute for Applied Analysis and Stochastics; Institute for Information Transmission Problems RAS) {pavel.dvurechensky,darina.dvinskikh}@wias-berlin.de; Alexander Gasnikov (Moscow Institute of Physics and Technology; Institute for Information Transmission Problems RAS) gasnikov@yandex.ru; César A. Uribe (Massachusetts Institute of Technology) cauribe@mit.edu; Angelia Nedić (Arizona State University; Moscow Institute of Physics and Technology) angelia.nedich@asu.edu
Pseudocode | Yes | Algorithm 1: Accelerated Primal-Dual Stochastic Gradient Method (APDSGD); Algorithm 2: Distributed computation of Wasserstein barycenter
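The core primitive behind the paper's APDSGD method is a gradient of the entropically smoothed dual of the optimal-transport problem, which each agent can evaluate locally. Below is a minimal numpy sketch of this primitive in its semi-dual form; the function name `smoothed_dual_grad` and the exact parameterization are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def smoothed_dual_grad(lam, C, q, gamma):
    """Gradient of the entropically smoothed (semi-dual) transport
    objective w.r.t. the dual variable lam.

    A sketch under assumed conventions: C is the n x n cost matrix,
    q the fixed marginal (a probability vector), gamma > 0 the
    entropic regularization strength.
    """
    # scores[i, j] = (lam[i] - C[i, j]) / gamma
    S = (lam[:, None] - C) / gamma
    S -= S.max(axis=0, keepdims=True)      # stabilize the softmax
    P = np.exp(S)
    P /= P.sum(axis=0, keepdims=True)      # column-wise softmax: conditional plan
    return P @ q                           # q-weighted row marginal
```

Because each column of `P` is a probability distribution and `q` sums to one, the returned gradient is itself a probability vector, which is the closed-form marginal structure that makes the dual approach cheap per iteration.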
Open Source Code | No | No explicit statement or link providing access to the source code for the methodology described in the paper was found.
Open Datasets | Yes | MNIST digit dataset [29]; IXI Magnetic Resonance dataset [1]
Dataset Splits | No | No specific training/test/validation dataset splits (e.g., percentages, counts, or standard split references) are explicitly provided in the main text.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided.
Software Dependencies | No | No specific software dependencies with version numbers are provided.
Experiment Setup | Yes | We assume n = 100 and the support of p is a set of 100 equally spaced points on the segment [−5, 5]. Figure 1 shows the performance of Algorithm 2 for four classes of networks: complete, cycle, star, and Erdős–Rényi. Moreover, we show the behavior for different network sizes, namely m = 10, 100, 200, 500. Figure 1: Dual function value and distance to consensus for 200, 100, 10, 500 agents, Mk = 100 and γ = 0.1.
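The experimental setup above (a 100-point grid on [−5, 5] and four network topologies) can be reproduced structurally with standard adjacency-matrix constructions. This is a minimal sketch assuming a graph Laplacian as the agents' communication matrix, which is one common choice in decentralized optimization; the paper's exact mixing matrix and edge probabilities are not specified here, so the Erdős–Rényi parameter `p` is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100                                   # support size from the paper
grid = np.linspace(-5.0, 5.0, n)          # 100 equally spaced points on [-5, 5]

def laplacian(A):
    """Graph Laplacian L = D - A of an adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def complete_graph(m):
    return np.ones((m, m)) - np.eye(m)

def cycle_graph(m):
    A = np.zeros((m, m))
    idx = np.arange(m)
    A[idx, (idx + 1) % m] = 1             # ring edges i -> i+1 (mod m)
    A[(idx + 1) % m, idx] = 1
    return A

def star_graph(m):
    A = np.zeros((m, m))
    A[0, 1:] = A[1:, 0] = 1               # hub node 0 connected to all others
    return A

def erdos_renyi_graph(m, p=0.5):          # edge probability p is an assumption
    U = rng.random((m, m))
    A = np.triu((U < p).astype(float), 1) # upper triangle only, no self-loops
    return A + A.T                        # symmetrize
```

Each builder returns a symmetric, zero-diagonal adjacency matrix for any of the agent counts m = 10, 100, 200, 500 studied in the paper; the Laplacian's zero row sums are what enforce the consensus constraint in decentralized formulations.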