Implicit Regularization Paths of Weighted Neural Representations

Authors: Jin-Hong Du, Pratik Patil

NeurIPS 2024

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | As a practical consequence of the path equivalences, we develop an efficient cross-validation method for tuning and apply it to subsampled pretrained representations across several models (e.g., ResNet-50) and datasets (e.g., CIFAR-100).
Researcher Affiliation | Academia | Jin-Hong Du, Carnegie Mellon University (jinhongd@andrew.cmu.edu); Pratik Patil, University of California, Berkeley (pratikpatil@berkeley.edu)
Pseudocode | Yes | Algorithm 1: Meta-algorithm for tuning of ensemble sizes and subsample matrices.
Open Source Code | Yes | The code for reproducing the results of this paper can be found at https://jaydu1.github.io/overparameterized-ensembling/weighted-neural.
Open Datasets | Yes | We consider ResNet-{18, 34, 50, 101} applied to the CIFAR-{10, 100} [9], Fashion-MNIST [21], Flowers-102 [14], and Food-101 [4] datasets.
Dataset Splits | Yes | For datasets with different data aspect ratios, we stratify 10% of the training samples as the training set for the CIFAR-100 dataset. The training and prediction errors are the mean squared errors on the training and test sets, respectively, aggregated over all the labels.
Hardware Specification | No | The paper acknowledges "the ACCESS allocation MTH230020 provided for some of the experiments performed on the Bridges-2 system at the Pittsburgh Supercomputing Center" but does not provide specific hardware details such as GPU/CPU models or memory specifications.
Software Dependencies | No | The paper mentions "The improved CV method is implemented in the Python library [24]" but does not provide specific version numbers for Python or other software dependencies.
Experiment Setup | Yes | The risk estimates are computed based on M0 = 25 base estimators using Algorithm 1 with λ = 10^{-3}.
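The paper's Algorithm 1 tunes ensemble sizes and subsample matrices via its path equivalences and cross-validation; the sketch below is not that algorithm, only a minimal, self-contained illustration of the quoted setup: averaging M0 = 25 ridge base estimators (λ = 10^{-3}) fit on random subsamples of a feature matrix, then computing a mean-squared-error risk estimate. The data shapes and the subsample size k are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for pretrained representations (the paper uses
# e.g. ResNet-50 features on CIFAR-100; these shapes are made up).
n, p = 200, 50
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

lam = 1e-3   # ridge penalty, matching the paper's lambda = 10^{-3}
M0 = 25      # number of base estimators, as in the paper
k = 100      # subsample size per base estimator (hypothetical choice)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution on a (sub)sample."""
    p = X.shape[1]
    n = X.shape[0]
    return np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)

# Average predictions over M0 subsample-ridge base estimators.
preds = np.zeros(n)
for _ in range(M0):
    idx = rng.choice(n, size=k, replace=False)  # subsample observations
    beta = ridge_fit(X[idx], y[idx], lam)
    preds += X @ beta / M0

# Mean-squared-error risk estimate of the ensemble on the full sample.
risk_estimate = np.mean((y - preds) ** 2)
```

Held-out evaluation (the paper reports test-set MSE) would apply the same averaged predictor to a separate test split rather than to the training sample as done here.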