Distributed Learning with Strategic Users: A Repeated Game Approach

Authors: Abdullah B. Akbay, Junshan Zhang (pp. 5976-5983)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Theoretical and empirical analysis show that the fusion center can incentivize the strategic users to cooperate and report informative gradient updates, thus ensuring the convergence." (...) "In this section, we evaluate the performance of SGD-SU (5) using real-life datasets." |
| Researcher Affiliation | Academia | Abdullah B. Akbay, Junshan Zhang (Arizona State University); aakbay@asu.edu, junshan.zhang@asu.edu |
| Pseudocode | Yes | Algorithm 1: Stochastic Gradient Descent with Strategic Users (SGD-SU) |
| Open Source Code | No | The paper neither states that source code for the method is released nor links to a code repository. |
| Open Datasets | Yes | "In our first set of experiments, we consider a binary logistic classification problem and use the KDD-Cup 04 dataset (Caruana, Joachims, and Backstrom 2004)." (...) "In our second set of experiments, we consider a neural network trained on the MNIST dataset (LeCun et al. 1998)." |
| Dataset Splits | No | The paper uses the KDD-Cup 04 and MNIST datasets but gives no train/validation/test splits and does not mention cross-validation. |
| Hardware Specification | No | The paper gives no details about the hardware (GPU/CPU models, memory, or cloud instance types) used to run the experiments. |
| Software Dependencies | No | The paper lists no software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) needed to replicate the experiments. |
| Experiment Setup | Yes | "The number of users is chosen as K = 50 and mini-batch size is s = 10." (...) "with a fixed step-size η satisfying 0 < η ≤ 1/(L(1 + H_T))." A hedged sketch of how these settings fit together appears after this table. |
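
For concreteness, below is a minimal sketch of how the reported settings (K = 50 users, mini-batch size s = 10, a fixed step size satisfying 0 < η ≤ 1/(L(1 + H_T))) could slot into an SGD-SU-style training loop like Algorithm 1. This is not the authors' implementation: the synthetic data, feature dimension `d`, smoothness constant `L_smooth`, value of `H_T`, and round count are all placeholder assumptions, and every user is assumed to cooperate, which is the behavior the paper's mechanism is designed to induce.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 50          # number of users (from the paper)
s = 10          # mini-batch size per user (from the paper)
d = 20          # feature dimension (assumed)
L_smooth = 1.0  # smoothness constant of the loss (assumed)
H_T = 4.0       # game-dependent constant in the paper's bound (assumed value)
eta = 1.0 / (L_smooth * (1.0 + H_T))  # fixed step size, 0 < eta <= 1/(L(1 + H_T))

# Synthetic local datasets, one per user (placeholder for KDD-Cup 04).
X = [rng.normal(size=(200, d)) for _ in range(K)]
w_true = rng.normal(size=d)
y = [(Xi @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float) for Xi in X]

def logistic_grad(w, Xi, yi, idx):
    """Mini-batch gradient of the logistic loss on the samples in idx."""
    Xb, yb = Xi[idx], yi[idx]
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return Xb.T @ (p - yb) / len(idx)

w = np.zeros(d)
for t in range(500):
    reports = []
    for k in range(K):
        idx = rng.choice(len(y[k]), size=s, replace=False)
        g = logistic_grad(w, X[k], y[k], idx)
        # A strategic user could instead report an uninformative update
        # (e.g., noise) to save computation; the paper's repeated-game
        # mechanism makes cooperation the best response. Here all users
        # cooperate and report their true mini-batch gradients.
        reports.append(g)
    # Fusion center aggregates the reported gradients and takes an SGD step.
    w -= eta * np.mean(reports, axis=0)
```

Note the role of the step-size condition: the bound 1/(L(1 + H_T)) shrinks the step size relative to the standard 1/L rule for L-smooth losses, which is what lets the convergence analysis absorb the extra noise introduced by strategic reporting.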