Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
Authors: Tianyi Lin, Zeyu Zheng, Michael Jordan
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the MNIST dataset. |
| Researcher Affiliation | Academia | University of California, Berkeley {darren_lin,zyzheng}@berkeley.edu, jordan@cs.berkeley.edu |
| Pseudocode | Yes | Algorithm 1 Gradient-Free Method (GFM) (see the sketch after this table) |
| Open Source Code | No | The paper includes a checklist at the end stating 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]', but the main body of the paper does not provide a specific URL or explicit statement about code availability. |
| Open Datasets | Yes | The dataset we use is the MNIST dataset1 [60] |
| Dataset Splits | No | The paper uses the MNIST dataset but does not explicitly state the training, validation, or test data splits, nor does it refer to predefined splits with citations within the main text. |
| Hardware Specification | Yes | All the experiments are implemented using PyTorch [73] on a workstation with a 2.6 GHz Intel Core i7 and 16GB memory. |
| Software Dependencies | No | All the experiments are implemented using PyTorch [73] on a workstation with a 2.6 GHz Intel Core i7 and 16GB memory. (PyTorch is named, but no version numbers or fuller dependency list are given.) |
| Experiment Setup | Yes | We set the learning rate η as 0.001. |
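For context on the pseudocode row above, Algorithm 1 (GFM) and its two-point stochastic variant (2-SGFM) are randomized-smoothing schemes that estimate a gradient from zeroth-order function evaluations. Below is a minimal NumPy sketch, assuming the standard two-point estimator g = (d / 2δ)(f(x + δw) − f(x − δw))w with w drawn uniformly from the unit sphere; the function names, toy objective, smoothing radius δ, and iteration count are illustrative, and only the learning rate η = 0.001 is taken from the paper's reported setup.

```python
import numpy as np

def two_point_grad_estimate(f, x, delta, rng):
    """Two-point gradient-free estimator (illustrative form):
    g = d/(2*delta) * (f(x + delta*w) - f(x - delta*w)) * w,
    with direction w drawn uniformly from the unit sphere."""
    d = x.size
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)  # normalize to a uniform direction on the sphere
    return (d / (2.0 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w

def gfm(f, x0, eta=1e-3, delta=1e-3, n_iters=5000, seed=0):
    """Plain gradient-free descent loop built on the two-point estimator.
    eta = 0.001 matches the learning rate reported in the paper;
    delta and n_iters are assumptions for this sketch."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iters):
        g = two_point_grad_estimate(f, x, delta, rng)
        x -= eta * g
    return x

if __name__ == "__main__":
    # Toy nonsmooth nonconvex objective (illustrative, not from the paper).
    f = lambda x: np.abs(x[0]) + max(x[1], 0.0) * np.abs(x[1] - 1.0)
    x = gfm(f, x0=np.array([2.0, 2.0]))
    print("approximate stationary point:", x)
```

The paper's actual experiments apply this kind of update to ReLU networks on MNIST in PyTorch; the sketch only shows the update rule itself on a scalar objective.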