Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks

Authors: Zixuan Zhang, Kaiqi Zhang, Minshuo Chen, Yuma Takeda, Mengdi Wang, Tuo Zhao, Yu-Xiang Wang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we validate our theoretical findings with numerical experiments. We focus on nonparametric regression problems for simplicity and consider the following function f0 : R^D → R: f0(x) = f̃0(Ux) = f̃0(x̃)
Researcher Affiliation | Academia | Zixuan Zhang (Georgia Tech, zzhang3105@gatech.edu); Kaiqi Zhang (UC Santa Barbara, kzhang70@ucsb.edu); Minshuo Chen (Northwestern University, minshuo.chen@northwestern.edu); Yuma Takeda (University of Tokyo, utklav1511@gmail.com); Mengdi Wang (Princeton University, mengdiw@princeton.edu); Tuo Zhao (Georgia Tech, tourzhao@gatech.edu); Yu-Xiang Wang (UC San Diego, yuxiangw@ucsd.edu)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | Our experiments use simulated data and do not require any datasets. The experiment implementation is simple and clearly described in Appendix B, so we feel there is no need to publish data and code.
Open Datasets | No | We focus on nonparametric regression problems for simplicity and consider the following function f0 : R^D → R: f0(x) = f̃0(Ux) = f̃0(x̃)... for a bag of t1, ..., tn ∈ [0, 1], we can generate a labeled dataset by yi = g0(ti) + N(0, 1).
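The data-generation step quoted above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the dimensions D and d, the sample size n, and the link function g0 are not specified in this excerpt, so the concrete choices below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes (not given in the excerpt).
D, d, n = 10, 1, 128

# U: a d x D matrix with orthonormal rows, projecting onto a
# low-dimensional subspace, so that x~ = U x.
U = np.linalg.qr(rng.standard_normal((D, D)))[0][:d]

def g0(t):
    """Placeholder for the paper's unspecified target function g0."""
    return np.sin(2 * np.pi * t)

# t1, ..., tn drawn from [0, 1]; labels yi = g0(ti) + standard Gaussian noise,
# matching the quoted recipe yi = g0(ti) + N(0, 1).
t = rng.uniform(0.0, 1.0, size=n)
y = g0(t) + rng.standard_normal(n)
```

Since the paper states the experiments use only simulated data of this kind, a fixed seed (as above) would suffice to make such a dataset reproducible.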
Dataset Splits | No | The paper describes the hyperparameters and models used in the numerical simulation but does not provide explicit details about dataset splits (training, validation, test percentages or counts). It mentions generating a 'labeled dataset' but not how it was partitioned for different stages.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used to run the experiments. The NeurIPS checklist states: 'Our experiments do not require large compute resources and can be reproduced on laptops.'
Software Dependencies | No | The paper mentions using 'off-the-shelf methods' and 'tools provided from the package, e.g., GP' for baseline comparisons, but it does not specify the names of these software packages or their version numbers.
Experiment Setup | Yes | Hyperparameter choices. In all the experiments the following architecture was used for PNN: w = 6, L = 10, M = 4, batch_size = 128, learning_rate = 1e-3. In all the experiments the following architecture was used for ConvResNeXt: w = 8, L = 6, K = 6, M = 2, N = 2. The batch size and learning rate were adjusted for each task.
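The reported hyperparameters can be captured as plain configuration dictionaries, which is how a reproduction might pin them down. The key names (w, L, M, K, N) follow the paper's notation; the per-task values marked None below are the ones the paper says were tuned rather than fixed.

```python
# Fixed PNN architecture and training settings, as reported.
pnn_config = {
    "w": 6,              # width
    "L": 10,             # depth
    "M": 4,
    "batch_size": 128,
    "learning_rate": 1e-3,
}

# Fixed ConvResNeXt architecture; batch size and learning rate
# were adjusted per task, so they are left unset here.
convresnext_config = {
    "w": 8,
    "L": 6,
    "K": 6,
    "M": 2,
    "N": 2,
    "batch_size": None,      # tuned per task
    "learning_rate": None,   # tuned per task
}
```

Keeping the tuned fields explicitly unset makes the reproducibility gap visible: anyone rerunning the experiments must supply those two values per task.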