Robust Optimization for Non-Convex Objectives

Authors: Robert S. Chen, Brendan Lucier, Yaron Singer, Vasilis Syrgkanis

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our approach experimentally on corrupted character classification and robust influence maximization in networks."
Researcher Affiliation | Collaboration | Robert Chen (Computer Science, Harvard University); Brendan Lucier (Microsoft Research New England); Yaron Singer (Computer Science, Harvard University); Vasilis Syrgkanis (Microsoft Research New England)
Pseudocode | Yes | Algorithm 1: Oracle Efficient Improper Robust Optimization; Algorithm 2: Greedy Stochastic Oracle for Submodular Maximization (Mgreedy). Minimal sketches of both procedures appear after the table.
Open Source Code | Yes | "Code used to implement the algorithms and run the experiments is available at https://github.com/12degrees/Robust-Classification/."
Open Datasets | Yes | "We use the MNIST handwritten digits data set containing 55000 training images, 5000 validation images, and 10000 test images..."; the Wikipedia Vote Graph [14].
Dataset Splits | Yes | "We use the MNIST handwritten digits data set containing 55000 training images, 5000 validation images, and 10000 test images."
Hardware Specification | No | The paper does not provide hardware details such as the CPU or GPU models or the memory used for the experiments.
Software Dependencies | No | The paper mentions general software components such as "stochastic gradient descent" and a "neural network" but does not give versions for programming languages, libraries, or frameworks (e.g., Python, TensorFlow, PyTorch).
Experiment Setup | Yes | "The network is trained using Gradient Descent with learning parameter 0.5 through 500 iterations of mini-batches of size 100." Four corruption types are considered (m = 4). "In Experiment A, the parameters are |V| = 7115, |E| = 103689, m = 10, p = 0.01 and k = 10." A hedged reconstruction of the classification setup appears after the table.
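
As a rough illustration of Algorithm 1 (Oracle Efficient Improper Robust Optimization), the sketch below runs multiplicative weights on the adversary's side over the m loss functions and calls an approximate Bayesian oracle each round; the robust output is the uniform distribution over the per-round solutions, which is what makes the method improper. All names here (`robust_opt`, `oracle`, `eta`, `T`) are our own placeholders, not identifiers from the paper or its repository.

```python
import numpy as np

def robust_opt(losses, oracle, T=100, eta=0.1):
    """Improper robust optimization via multiplicative weights.

    losses: list of m loss functions, each mapping a solution x into [0, 1].
    oracle: approximate Bayesian oracle; given a weight vector w over the
            losses, returns a solution that (approximately) minimizes
            sum_i w[i] * losses[i](x).
    """
    m = len(losses)
    weights = np.ones(m)        # adversary's weights over the loss functions
    solutions = []
    for _ in range(T):
        w = weights / weights.sum()
        x = oracle(w)           # learner best-responds to the current mixture
        solutions.append(x)
        # The adversary up-weights the losses on which x does poorly,
        # steering later rounds toward the hardest objectives.
        loss_vals = np.array([L(x) for L in losses])
        weights *= np.exp(eta * loss_vals)
    # The improper robust solution is the uniform mixture over all rounds.
    return solutions
```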
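Algorithm 2 plays the oracle role in the influence-maximization experiment: a weighted sum of monotone submodular influence functions is itself monotone submodular, so the classical greedy rule yields a (1 - 1/e)-approximate oracle. The sketch below assumes exact evaluation of each set function, whereas the paper's stochastic oracle works from sampled estimates; `greedy_oracle` and its parameters are again placeholders.

```python
def greedy_oracle(weights, f_list, ground_set, k):
    """Greedy (1 - 1/e)-approximate oracle for a weighted sum of
    monotone submodular set functions (e.g. influence functions)."""
    def F(S):
        # Objective under the adversary's current mixture over f_list.
        return sum(w * f(S) for w, f in zip(weights, f_list))

    S = set()
    for _ in range(k):
        # Add the element with the largest marginal gain under the mixture.
        best_v = max(ground_set - S, key=lambda v: F(S | {v}) - F(S))
        S.add(best_v)
    return S
```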
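The Experiment Setup row pins down the classification run precisely enough to attempt a reconstruction. Since the paper names no framework or exact architecture (see the Software Dependencies row), the following is only a minimal sketch under assumed choices: TensorFlow/Keras, a single softmax layer standing in for the paper's network, the stated 55000/5000/10000 split, and gradient descent at learning rate 0.5 over 500 mini-batches of size 100.

```python
import tensorflow as tf

# Keras ships MNIST as 60000 train / 10000 test; holding out the last
# 5000 training images reproduces the stated 55000/5000/10000 split.
(x, y), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
x_train, y_train = x[:55000], y[:55000]
x_val, y_val = x[55000:], y[55000:]

# Placeholder architecture: the paper only says "neural network".
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.5),  # "learning parameter 0.5"
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Exactly 500 mini-batch iterations of size 100, as stated in the paper.
train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(55000).batch(100).take(500))
model.fit(train_ds, epochs=1, validation_data=(x_val, y_val))
```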