Distributed Distributionally Robust Optimization with Non-Convex Objectives

Authors: Yang Jiao, Kai Yang, Dongjin Song

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive empirical studies on real-world datasets demonstrate that the proposed method can not only achieve fast convergence and remain robust against data heterogeneity as well as malicious attacks, but also trade off robustness with performance.
Researcher Affiliation | Academia | Yang Jiao (Tongji University, yangjiao@tongji.edu.cn); Kai Yang (Tongji University, kaiyang@tongji.edu.cn); Dongjin Song (University of Connecticut, dongjin.song@uconn.edu)
Pseudocode | Yes | Algorithm 1: ASPIRE-EASE
Open Source Code | No | In response to the question about code, data, and instructions needed to reproduce the results, the paper states only 'The references of the data used in this paper are added in Section 6.1.'; it does not provide an explicit statement about releasing its source code or a direct link to a repository.
Open Datasets | Yes | We compare the proposed ASPIRE-EASE with baseline methods based on SHL [20], Person Activity [26], Single Chest-Mounted Accelerometer (SM-AC) [9] and Fashion MNIST [51] datasets.
Dataset Splits | No | The ethics statement refers to 'Section C.2' for data splits and training details, but these details are not provided within the main body of the paper.
Hardware Specification | Yes | We implement our model with PyTorch and conduct all the experiments on a server with two TITAN V GPUs.
Software Dependencies | No | The paper mentions 'We implement our model with PyTorch' but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | No | The paper states 'The base models are trained with SGD. More details are given in Appendix C.', deferring the specific hyperparameters to the appendix rather than giving them in the main text.
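
For context only, the sketch below illustrates the kind of setup the quoted statements describe: loading one of the cited public datasets (Fashion MNIST via torchvision) and training a small base model with SGD in PyTorch. This is not the authors' code (none is released); the model architecture, batch size, learning rate, and epoch count are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Load Fashion MNIST, one of the four public datasets listed in the Open Datasets row.
transform = transforms.ToTensor()
train_set = datasets.FashionMNIST(root="./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)  # batch size is a placeholder

# A small placeholder base model; the paper defers its actual architectures to Appendix C.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# "The base models are trained with SGD" -- the learning rate here is a placeholder, not from the paper.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(1):  # single epoch, for illustration only
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```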