Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Adaptive Approximation and Generalization of Deep Neural Network with Intrinsic Dimensionality
Authors: Ryumei Nakada, Masaaki Imaizumi
JMLR 2020 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct a numerical simulation to validate the theoretical results. |
| Researcher Affiliation | Academia | Department of Statistics, Rutgers University, USA; The University of Tokyo, Japan; Komaba Institute for Science, The University of Tokyo, Japan; Center for Advanced Intelligence Project, RIKEN, Japan |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are present in the paper. The methodology is described through mathematical formulations and proofs. |
| Open Source Code | No | No statement regarding the public availability of source code, and no link to a code repository, is provided. |
| Open Datasets | Yes | MNIST dataset (Le Cun et al., 2015) and object images using the Canadian Institute for Advanced Research (CIFAR-10) dataset (Krizhevsky and Hinton, 2009). ...We compare the performance of DNNs using the modified National Institute of Standards and Technology (MNIST) dataset. |
| Dataset Splits | No | The paper mentions using "validation data" and "test errors" but does not provide specific percentages or absolute sample counts for how the data was split into training, validation, and test sets. For example, in Section 6.3, it states "We measure the error based on the L2-norm. We replicate the setting 10 times, and discard two replications with the first and second-largest test errors..." without detailing the test set's origin or size. |
| Hardware Specification | No | No specific hardware details such as GPU/CPU models, processor types, or memory specifications are mentioned for running the experiments. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow) are mentioned in the paper. It mentions using 'Adam (Kingma and Ba, 2015)' as an optimizer, but without a software library version. |
| Experiment Setup | Yes | For the learning process, a DNN architecture with four layers and the ReLU activation function are employed, and each layer has D units except the output layer. For optimization, we employ Adam (Kingma and Ba, 2015) with the following hyper-parameters: 0.001 learning rate and (β1, β2) = (0.9, 0.999). |
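Since the paper releases no code, the setup quoted above (four layers of width D with ReLU activations, Adam with learning rate 0.001 and (β1, β2) = (0.9, 0.999)) can only be approximated. Below is a minimal NumPy sketch under those stated hyper-parameters; the helper names (`init_mlp`, `forward`, `adam_step`), the weight initialization, and the input/output dimensions are illustrative assumptions, not details from the paper:

```python
import numpy as np

def init_mlp(d_in, D, d_out, rng):
    """Four-layer MLP: every layer has D units except the output layer.
    He-style initialization is an assumption; the paper does not specify one."""
    sizes = [d_in, D, D, D, d_out]
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """ReLU on hidden layers, linear output layer."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b

def adam_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Kingma and Ba, 2015) with the paper's hyper-parameters.
    state = (first moment m, second moment v, step count t)."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)
```

In practice one would use a framework optimizer (e.g. a library Adam implementation) rather than this hand-rolled update; the sketch only makes the reported hyper-parameters concrete.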