Distributed Zero-Order Optimization under Adversarial Noise
Authors: Arya Akhavan, Massimiliano Pontil, Alexandre Tsybakov
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Appendix E, we present a numerical comparison between the algorithm proposed in this paper and that in Akhavan et al. [2020]. The results confirm our theoretical findings. The algorithm of this paper converges faster and the advantage is more pronounced as d increases. |
| Researcher Affiliation | Academia | Arya Akhavan (CSML, Istituto Italiano di Tecnologia and CREST, ENSAE, IP Paris; aria.akhavanfoomani@iit.it); Massimiliano Pontil (CSML, Istituto Italiano di Tecnologia and University College London; massimiliano.pontil@iit.it); Alexandre B. Tsybakov (CREST, ENSAE, IP Paris; alexandre.tsybakov@ensae.fr) |
| Pseudocode | Yes | Algorithm 1 Distributed Zero-Order Gradient [...] Algorithm 2 Gradient Estimator with 2d Queries (an illustrative sketch of a 2d-query estimator is given after the table) |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the proposed method is open source or publicly available. |
| Open Datasets | No | The paper describes theoretical algorithms and their convergence properties, and mentions a numerical comparison, but does not specify or provide access information for any publicly available or open dataset used in its evaluation. |
| Dataset Splits | No | The paper does not specify any dataset splits (e.g., training, validation, test percentages or counts) needed to reproduce experiments, as it does not describe specific experiments on datasets in detail. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its numerical comparisons or experiments. |
| Software Dependencies | No | The paper does not list specific software components with version numbers (e.g., Python, PyTorch, TensorFlow, or specific solvers) required to reproduce the work. |
| Experiment Setup | No | The paper specifies mathematical tuning parameters for the algorithm (η_t and h_t) but does not provide concrete experimental setup details such as hyperparameters, learning rates, batch sizes, or optimizer settings for a numerical evaluation. |
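For orientation around the Pseudocode and Experiment Setup rows above, the snippet below is a minimal, hedged sketch of what a coordinate-wise finite-difference gradient estimator using 2d queries and a projected zero-order update might look like. It is a generic illustration, not the paper's Algorithm 1 (Distributed Zero-Order Gradient) or Algorithm 2 (Gradient Estimator with 2d Queries); the function names, the Euclidean-ball projection, the schedules chosen for η_t and h_t, and the noisy quadratic objective in the usage lines are all assumptions made for illustration.

```python
import numpy as np


def gradient_estimator_2d_queries(f, x, h):
    """Generic coordinate-wise two-point gradient estimator (2d queries total).

    Sketch only: `f` is a (possibly noisy) zero-order oracle, `x` the current
    iterate, and `h` plays the role of a discretization parameter like h_t.
    """
    d = x.shape[0]
    g = np.zeros(d)
    for j in range(d):
        e_j = np.zeros(d)
        e_j[j] = 1.0
        # Two oracle calls per coordinate: 2d function evaluations in total.
        g[j] = (f(x + h * e_j) - f(x - h * e_j)) / (2.0 * h)
    return g


def projected_zero_order_step(f, x, eta, h, radius=1.0):
    """One zero-order gradient step followed by projection onto an l2 ball.

    `eta` stands in for a step size like η_t; the projection set (a ball of
    radius `radius`) is an assumption, not taken from the paper.
    """
    x_next = x - eta * gradient_estimator_2d_queries(f, x, h)
    norm = np.linalg.norm(x_next)
    if norm > radius:
        x_next *= radius / norm
    return x_next


# Usage sketch on an assumed noisy quadratic objective.
rng = np.random.default_rng(0)
noisy_f = lambda x: float(np.sum(x ** 2)) + 0.01 * rng.standard_normal()
x = np.full(3, 0.5)
for t in range(1, 101):
    x = projected_zero_order_step(noisy_f, x, eta=1.0 / t, h=t ** (-0.25))
```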