Decentralized Gossip-Based Stochastic Bilevel Optimization over Communication Networks

Authors: Shuoguang Yang, Xuezhou Zhang, Mengdi Wang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test our algorithm on the examples of hyperparameter tuning and decentralized reinforcement learning. Simulated experiments confirmed that our algorithm achieves the state-of-the-art training efficiency and test accuracy.
Researcher Affiliation | Academia | Shuoguang Yang (IEDA, HKUST, yangsg@ust.hk); Xuezhou Zhang (Princeton University, xz7392@princeton.edu); Mengdi Wang (Princeton University, mengdiw@princeton.edu)
Pseudocode | Yes | Algorithm 1: Gossip-Based Decentralized Stochastic Bilevel Optimization (an illustrative sketch of this style of update appears after this table)
Open Source Code | No | We will provide the code via GitHub in the camera-ready version stage.
Open Datasets | Yes | Hyper-parameter Optimization: We consider federated hyper-parameter optimization (2) for a handwriting recognition problem over the Australian handwriting dataset (Chang and Lin, 2011). (A generic bilevel formulation of this kind of problem is sketched after the table.)
Dataset Splits | No | Before testing Alg. 1, we first randomly split the dataset into training and validation sets, and then allocate both the training and validation sets over the K agents. (An allocation sketch appears after the table.)
Hardware Specification | No | We run the experiments on a single server desktop computer.
Software Dependencies | No | The paper does not specify any software names with version numbers.
Experiment Setup | Yes | We then run Algorithm 1 for T = 20000 iterations, with b = 200, a step size of 0.1 K/T, and β_t = γ_t = 10.
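
Algorithm 1 itself is not reproduced in this summary. As a point of reference, the following is a minimal, illustrative sketch of how a gossip-based decentralized stochastic bilevel update of this general style can be organized, written against a toy quadratic problem. The ring mixing matrix, toy objectives, step sizes, and hypergradient estimator are all assumptions made for the sketch, not the authors' Algorithm 1.

```python
# Toy sketch of a gossip-based decentralized stochastic bilevel step.
# NOT the paper's Algorithm 1: the ring mixing matrix, toy objectives,
# step sizes, and hypergradient estimator are illustrative assumptions.
import numpy as np

K, d, T, b = 8, 5, 2000, 200            # agents, dimension, iterations, batch size
rng = np.random.default_rng(0)

# Doubly stochastic mixing matrix for an assumed ring communication graph.
W = np.zeros((K, K))
for k in range(K):
    W[k, k] = 0.5
    W[k, (k - 1) % K] = 0.25
    W[k, (k + 1) % K] = 0.25

# Toy local problems: inner objective g_k(x, y) = 0.5 * ||y - A_k x||^2,
# outer objective f_k(x) = 0.5 * ||y*(x) - c_k||^2 with y*(x) = A_k x.
A = rng.normal(size=(K, d, d)) / d      # scaled for numerical stability
c = rng.normal(size=(K, d))

x = np.zeros((K, d))                    # outer variable, one local copy per agent
y = np.zeros((K, d))                    # inner variable, one local copy per agent
alpha, beta = 0.05, 0.5                 # assumed outer/inner step sizes

for _ in range(T):
    x = W @ x                           # gossip step: mix outer copies with neighbors
    for k in range(K):
        # Mini-batch noise stands in for stochastic gradients of the inner problem.
        noise = rng.normal(scale=0.1, size=(b, d)).mean(axis=0)
        y[k] -= beta * (y[k] - A[k] @ x[k] + noise)    # inner (lower-level) update
        # Hypergradient of the toy outer objective, A_k^T (y*(x) - c_k),
        # approximated with the current inner iterate y_k.
        x[k] -= alpha * (A[k].T @ (y[k] - c[k]))       # outer (upper-level) update

print("consensus error:", np.linalg.norm(x - x.mean(axis=0, keepdims=True)).round(4))
```

Each agent keeps local copies of the outer and inner variables; the gossip step averages the outer copies over the communication graph, while the inner and outer stochastic updates run purely on local data.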
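
The "federated hyper-parameter optimization (2)" quoted above points to an equation in the paper that is not reproduced here. Under assumed notation (λ for hyperparameters, w for model weights, and per-agent training/validation losses over K agents), a generic bilevel formulation of this kind of problem reads:

```latex
\min_{\lambda}\ \frac{1}{K}\sum_{k=1}^{K} L^{\mathrm{val}}_{k}\bigl(w^{*}(\lambda)\bigr)
\qquad \text{s.t.} \qquad
w^{*}(\lambda) \in \operatorname*{arg\,min}_{w}\ \frac{1}{K}\sum_{k=1}^{K} L^{\mathrm{tr}}_{k}(w;\lambda)
```

The outer problem tunes λ against validation data while the inner problem fits w on training data for the current λ; in the decentralized setting both sums are evaluated only through local computation and gossip communication.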
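
The split-and-allocate step in the "Dataset Splits" row is not specified beyond the sentence quoted there. A minimal sketch of one way to perform it, assuming the LIBSVM "australian" data file, a 70/30 train/validation split, and K = 10 agents (all illustrative assumptions), is:

```python
# Sketch of a random split followed by allocation over K agents.
# The file name, split ratio, and K are assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split

K = 10                                                  # number of agents (assumed)
X, y = load_svmlight_file("australian_scale")           # assumed local LIBSVM file
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Give each agent a disjoint shard of both the training and validation splits.
train_idx = np.array_split(np.arange(X_tr.shape[0]), K)
val_idx = np.array_split(np.arange(X_val.shape[0]), K)
agents = [
    {"X_tr": X_tr[tr], "y_tr": y_tr[tr], "X_val": X_val[va], "y_val": y_val[va]}
    for tr, va in zip(train_idx, val_idx)
]
```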