Gossip Dual Averaging for Decentralized Optimization of Pairwise Functions

Authors: Igor Colin, Aurélien Bellet, Joseph Salmon, Stéphan Clémençon

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present numerical simulations on Area Under the ROC Curve (AUC) maximization and metric learning problems which illustrate the practical interest of our approach." (An illustrative form of such a pairwise objective is written out below.)
Researcher Affiliation | Academia | Igor Colin (igor.colin@telecom-paristech.fr), LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France; Aurélien Bellet (aurelien.bellet@inria.fr), Magnet Team, INRIA Lille - Nord Europe, 59650 Villeneuve d'Ascq, France; Joseph Salmon (joseph.salmon@telecom-paristech.fr), LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France; Stéphan Clémençon (stephan.clemencon@telecom-paristech.fr), LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France
Pseudocode | Yes | Algorithm 1: Stochastic dual averaging in the centralized setting; Algorithm 2: Gossip dual averaging for pairwise function in synchronous setting; Algorithm 3: Gossip dual averaging for pairwise function in asynchronous setting. (A hedged code sketch in the spirit of the synchronous variant follows the table.)
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | "We use the Breast Cancer Wisconsin dataset, which consists of n = 699 points in d = 11 dimensions." (A data-loading sketch is given after the table.)
Dataset Splits | No | The paper mentions the datasets used but does not provide specific details on training, validation, or test splits (e.g., percentages, sample counts, or citations to predefined splits).
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory, or specific computing environments) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names or solver versions, that would be needed to replicate the experiments.
Experiment Setup | Yes | "We initialize each θi to 0 and for each network, we run 50 times Algorithms 2 and 3 with γ(t) = 1/√t." (These choices are reflected in the algorithm sketch below.)
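
For context on the pairwise objectives referenced in the Research Type row, a generic problem of this kind can be written as follows; the notation is chosen here for illustration and is not necessarily the paper's exact formulation:

    \min_{\theta \in \mathbb{R}^d} \; f(\theta) \;=\; \frac{1}{n(n-1)} \sum_{i \neq j} \ell(\theta; x_i, x_j)

For AUC maximization, a standard convex surrogate pairs a positive example x_i (y_i = +1) with a negative example x_j (y_j = -1), for instance the logistic loss

    \ell(\theta; x_i, x_j) \;=\; \log\!\left(1 + \exp\!\left(-\theta^\top (x_i - x_j)\right)\right)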
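
The Pseudocode and Experiment Setup rows mention gossip dual averaging with each θi initialized to 0 and step size γ(t) = 1/√t. The sketch below is a minimal, hedged illustration of that style of algorithm on the logistic pairwise loss above, assuming one data point per node, pairwise gossip over a fixed edge list, and an l2 prox term; it is not a transcription of the paper's Algorithms 2 or 3, whose exact updates should be taken from the paper.

    import numpy as np

    def pairwise_grad(theta, x_pos, x_neg):
        # Gradient of the logistic AUC surrogate log(1 + exp(-theta^T (x_pos - x_neg))).
        d = x_pos - x_neg
        return -d / (1.0 + np.exp(theta @ d))

    def gossip_dual_averaging(X, y, edges, T=5000, seed=0):
        # X: (n, dim) array, one point per node; y: (n,) array of 0/1 labels;
        # edges: list of (i, j) pairs defining the communication graph.
        rng = np.random.default_rng(seed)
        n, dim = X.shape
        z = np.zeros((n, dim))          # dual variables (accumulated gradients)
        theta = np.zeros((n, dim))      # primal iterates, initialized to 0
        theta_avg = np.zeros((n, dim))  # running averages of the iterates
        aux = X.copy()                  # auxiliary points propagated by gossip
        aux_y = y.copy()

        for t in range(1, T + 1):
            gamma = 1.0 / np.sqrt(t)    # step size gamma(t) = 1/sqrt(t)
            i, j = edges[rng.integers(len(edges))]

            # Gossip step: the two endpoints average their dual variables
            # and swap their auxiliary observations.
            z[i] = z[j] = 0.5 * (z[i] + z[j])
            aux[i], aux[j] = aux[j].copy(), aux[i].copy()
            aux_y[i], aux_y[j] = aux_y[j], aux_y[i]

            # Local update: each endpoint pairs its own point with the received
            # auxiliary point (only positive/negative pairs contribute to AUC).
            for k in (i, j):
                if y[k] != aux_y[k]:
                    x_pos, x_neg = (X[k], aux[k]) if y[k] == 1 else (aux[k], X[k])
                    z[k] += pairwise_grad(theta[k], x_pos, x_neg)

            # Dual-averaging prox step with the l2 regularizer (1/2)||theta||^2,
            # applied at every node for simplicity in this sketch.
            theta = -gamma * z
            theta_avg += (theta - theta_avg) / t

        return theta_avg

For instance, edges = [(a, b) for a in range(n) for b in range(a + 1, n)] gives the complete graph; in a decentralized experiment, performance is typically tracked by evaluating the objective at the network-wide average of the returned per-node iterates.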
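
The Open Datasets row quotes n = 699 points in d = 11 dimensions for the Breast Cancer Wisconsin data. The snippet below is one way to obtain a comparable matrix, assuming the original (not the diagnostic) version at its historical UCI path; the column layout (id, nine cytological features, class label) and the standardization step are assumptions, and the paper's own preprocessing may differ.

    import pandas as pd

    # Historical UCI location of the original Breast Cancer Wisconsin data;
    # this path is an assumption and may have moved -- a local copy works too.
    URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "breast-cancer-wisconsin/breast-cancer-wisconsin.data")

    cols = ["id"] + [f"f{k}" for k in range(1, 10)] + ["class"]
    df = pd.read_csv(URL, names=cols, na_values="?").dropna()

    X = df[cols[1:-1]].to_numpy(dtype=float)       # nine cytological features
    y = (df["class"].to_numpy() == 4).astype(int)  # 4 = malignant -> 1, 2 = benign -> 0
    X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize features (assumed)

The 11 raw columns (id, nine features, class) are consistent with the d = 11 quoted above; dropping rows with missing entries leaves slightly fewer than 699 points, so the paper may handle them differently. The resulting X and y can be fed to the gossip sketch above with one point per node.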