SGD Learns the Conjugate Kernel Class of the Network
Authors: Amit Daniely
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | From an empirical perspective, in [Daniely et al., 2017], it is shown that for standard convolutional networks the conjugate class contains functions whose performance is close to the performance of the function that is actually learned by the network. This is based on experiments on the standard CIFAR-10 dataset. |
| Researcher Affiliation | Collaboration | Amit Daniely, Hebrew University and Google Research, amit.daniely@mail.huji.ac.il |
| Pseudocode | Yes | Algorithm 1 Generic Neural Network Training |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | This is based on experiments on the standard CIFAR-10 dataset. |
| Dataset Splits | No | The paper mentions using CIFAR-10 for its empirical evaluation but does not specify training, validation, or test splits. The focus is on theoretical guarantees, not empirical reproduction details. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | Although Algorithm 1 and the theorems define parameters and conditions (e.g., learning rate η, batch size m), the paper does not provide concrete numerical hyperparameter values or system-level training settings of the kind typically found in an experimental setup description. |
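For context on the "Pseudocode" row above: the paper's Algorithm 1 describes generic neural network training by mini-batch SGD with abstract parameters (learning rate η, batch size m) but no concrete values. The following is a minimal illustrative sketch of such generic SGD training, not the paper's actual implementation; the one-hidden-layer ReLU architecture, squared loss, and all hyperparameter values here are assumptions chosen for demonstration.

```python
import numpy as np

def sgd_train(X, y, width=8, lr=0.1, batch_size=4, steps=300, seed=0):
    """Generic mini-batch SGD sketch: a one-hidden-layer ReLU network
    trained on squared loss. All hyperparameter values are illustrative,
    not taken from the paper."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random initialization (Algorithm 1 starts from random weights).
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, width))
    w2 = rng.normal(0.0, 1.0 / np.sqrt(width), width)
    for _ in range(steps):
        # Sample a mini-batch of size m = batch_size.
        idx = rng.choice(len(X), size=batch_size, replace=False)
        xb, yb = X[idx], y[idx]
        h = np.maximum(xb @ W1, 0.0)            # ReLU hidden layer
        err = h @ w2 - yb                       # squared-loss residual
        # Backpropagate the mini-batch gradient and take one SGD step.
        grad_w2 = h.T @ err / batch_size
        grad_h = np.outer(err, w2) * (h > 0)
        grad_W1 = xb.T @ grad_h / batch_size
        W1 -= lr * grad_W1
        w2 -= lr * grad_w2
    return W1, w2

# Usage: fit a small synthetic regression problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
y = np.maximum(X[:, 0], 0.0)                    # target with ReLU structure
W1, w2 = sgd_train(X, y)
mse = np.mean((np.maximum(X @ W1, 0.0) @ w2 - y) ** 2)
```

The sketch follows the generic template the table refers to (random initialization, repeated mini-batch sampling, gradient step with rate η); any concrete choice of η, m, architecture, and loss is left open by the paper.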