Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence
Authors: Diyuan Wu, Vyacheslav Kungurtsev, Marco Mondelli
TMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical results. Experimental setup: We train a two-layer and a three-layer fully connected neural network in the mean-field regime on the MNIST dataset. The training algorithm is stochastic gradient descent with momentum, and we evaluate the dropout stability of the learnt models... Experimental results: In Figure 1, we plot the log of the dropout error defined in (22) as a function of the number of neurons in each layer. |
| Researcher Affiliation | Academia | Diyuan Wu, Institute of Science and Technology Austria (ISTA); Vyacheslav Kungurtsev, Czech Technical University; Marco Mondelli, Institute of Science and Technology Austria (ISTA) |
| Pseudocode | No | The paper contains extensive mathematical derivations and descriptions of dynamics, but it does not include any explicitly labeled pseudocode or algorithm blocks with structured, code-like formatting. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide links to a code repository. |
| Open Datasets | Yes | We train a two-layer and a three-layer fully connected neural network in the meanfield regime on the MNIST dataset. |
| Dataset Splits | No | The paper states using the MNIST dataset and training parameters like batch size and epochs, but it does not specify any explicit training/test/validation splits for the dataset. |
| Hardware Specification | No | The paper describes the experimental setup including the dataset and training parameters, but it does not specify any hardware used (e.g., CPU, GPU models, or cloud computing resources). |
| Software Dependencies | No | We use the PyTorch default initialization, pick the learning rate ε to be 0.05 and the momentum to be 0.9 (which implies that γ = 2). |
| Experiment Setup | Yes | We use the PyTorch default initialization, pick the learning rate ε to be 0.05 and the momentum to be 0.9 (which implies that γ = 2). We rescale the learning rate, so that the scaling of the gradient does not depend on n, as required by our theory. The batch size is 100 and we train for 25 epochs, which means that the neural network is trained for 15000 steps (each epoch contains 600 steps and there are 25 epochs). |
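The reported setup (two-layer fully connected network in the mean-field regime, PyTorch default initialization, SGD with momentum 0.9, learning rate 0.05 rescaled with the width n, batch size 100) can be sketched as follows. This is a minimal illustration, not the authors' code: the 1/n output scaling and the multiplication of the learning rate by n are assumptions about how the mean-field parameterization and the paper's learning-rate rescaling are implemented, and the synthetic batch stands in for MNIST.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted in the paper's setup.
n = 1000          # hidden width (varied in the paper's Figure 1)
lr = 0.05
momentum = 0.9    # stated to imply gamma = 2 in the heavy-ball parameterization
batch_size = 100

class MeanFieldTwoLayer(nn.Module):
    """Two-layer network with an assumed mean-field 1/n output scaling,
    so the output is an empirical average over hidden neurons."""

    def __init__(self, d_in=784, n_hidden=n, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, n_hidden)   # PyTorch default initialization
        self.fc2 = nn.Linear(n_hidden, d_out)
        self.n = n_hidden

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x))) / self.n

model = MeanFieldTwoLayer()
# SGD with momentum; the paper rescales the learning rate so the gradient
# scale does not depend on n. Multiplying lr by n is one such rescaling,
# assumed here since the paper's quoted text does not give the exact form.
opt = torch.optim.SGD(model.parameters(), lr=lr * n, momentum=momentum)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in batch; replace with torchvision MNIST loaders.
x = torch.randn(batch_size, 784)
y = torch.randint(0, 10, (batch_size,))

opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

With 600 batches of size 100 per MNIST epoch, 25 epochs give the 15000 steps quoted above.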