Learning with convolution and pooling operations in kernel methods
Authors: Theodor Misiakiewicz, Song Mei
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform a simple numerical experiment on simulated data. We take x ~ Unif(Q_d) with d = 30, and consider two target functions: f_{LF,3}(x) = (1/d) Σ_{i∈[d]} x_i x_{i+1} x_{i+2}, and f_{HF,3}(x) = (1/d) Σ_{i∈[d]} (-1)^i x_i x_{i+1} x_{i+2}. In Figure 1, we report the test errors of fitting f_{LF,3} (left) and f_{HF,3} (right) using kernel ridge regression with these 5 kernels. |
| Researcher Affiliation | Academia | Theodor Misiakiewicz, Department of Statistics, Stanford University, Stanford, CA 94305, misiakie@stanford.edu; Song Mei, Department of Statistics, University of California, Berkeley, Berkeley, CA 94720, songmei@berkeley.edu |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. It is a theoretical paper focused on mathematical characterizations. |
| Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A] Our paper is theoretical and we only provide small numerical illustrations (which are fully mathematically explicit and only require matrix inversions). |
| Open Datasets | No | The paper states 'We perform a simple numerical experiment on simulated data. We take x Unif(Qd) with d = 30, and consider two target functions'. This data is self-generated based on a uniform distribution, not a publicly available dataset with a specific access link or citation. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits. It mentions 'n i.i.d. samples' for training and evaluates 'test error', but it does not detail how the data was partitioned into these sets (e.g., percentages, counts, or a standard split method). |
| Hardware Specification | No | The paper does not specify any hardware details like GPU models, CPU types, or memory. In the checklist, it states: 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]' |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). The numerical simulations are described as 'fully mathematically explicit and only require matrix inversions,' which implies standard mathematical software without needing specific versioning details to reproduce. |
| Experiment Setup | Yes | We choose a small regularization parameter λ = 10^{-6}, and the noise level σ_ε = 0. The curves are averaged over 5 independent instances and the error bar stands for the standard deviation of these instances. |
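The experiment described above (kernel ridge regression on x ~ Unif(Q_d) with the low- and high-frequency cyclic targets, λ = 10^{-6}, σ_ε = 0) can be sketched as follows. This is a minimal illustration, not the authors' code: the sample sizes `n`, `n_test` and the placeholder inner-product kernel are assumptions; the paper actually compares 5 convolutional/pooling kernels whose definitions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test = 30, 500, 200   # d = 30 as in the paper; n, n_test are illustrative choices
lam = 1e-6                    # regularization λ = 10^{-6}, noise σ_ε = 0 (paper's setup)

# x ~ Unif(Q_d): uniform on the hypercube {-1, +1}^d
X_train = rng.choice([-1.0, 1.0], size=(n, d))
X_test = rng.choice([-1.0, 1.0], size=(n_test, d))

def f_lf(X):
    """Low-frequency target: f_{LF,3}(x) = (1/d) Σ_i x_i x_{i+1} x_{i+2} (cyclic indices)."""
    return (X * np.roll(X, -1, axis=1) * np.roll(X, -2, axis=1)).sum(axis=1) / X.shape[1]

def f_hf(X):
    """High-frequency target: f_{HF,3}(x) = (1/d) Σ_i (-1)^i x_i x_{i+1} x_{i+2} (cyclic)."""
    signs = (-1.0) ** np.arange(X.shape[1])
    return (signs * X * np.roll(X, -1, axis=1) * np.roll(X, -2, axis=1)).sum(axis=1) / X.shape[1]

def kernel(A, B):
    # Placeholder inner-product kernel; the paper's 5 convolutional/pooling
    # kernels would be substituted here.
    return np.exp(A @ B.T / A.shape[1])

def krr_test_error(f):
    """Fit kernel ridge regression on noiseless labels and return the test MSE."""
    y = f(X_train)  # σ_ε = 0, so labels are noiseless
    K = kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    pred = kernel(X_test, X_train) @ alpha
    return np.mean((pred - f(X_test)) ** 2)

err_lf = krr_test_error(f_lf)
err_hf = krr_test_error(f_hf)
```

The paper's Figure 1 averages such test errors over 5 independent instances; repeating the above with different `rng` seeds and taking the mean and standard deviation reproduces that averaging scheme.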