Approximation and Learning with Deep Convolutional Models: a Kernel Perspective

Authors: Alberto Bietti

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "These achieve good empirical performance on standard vision datasets, while providing a precise description of their functional space that yields new insights on their inductive bias." and "Table 1: Cifar10 test accuracy with 2-layer convolutional kernels with 3x3 patches and pooling/downsampling sizes [2, 5], with different choices of patch kernels κ1 and κ2."
Researcher Affiliation | Academia | Alberto Bietti, Center for Data Science, New York University, alberto.bietti@nyu.edu
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "Code is available at https://github.com/albietz/ckn_kernel."
Open Datasets | Yes | "We consider classification on Cifar10 dataset, which consists of 50k training images and 10k test images with 10 different output categories."
Dataset Splits | No | The paper states "Cifar10 dataset, which consists of 50k training images and 10k test images", but does not explicitly mention a separate validation split or its size.
Hardware Specification | Yes | "The computation of kernel matrices is distributed on up to 1000 cores on a cluster consisting of Intel Xeon processors."
Software Dependencies | No | The paper mentions C++, the Eigen library, and a PyTorch implementation, but does not provide specific version numbers for any of these software components.
Experiment Setup | Yes | "We report the test accuracy for a fixed regularization parameter λ = 10⁻⁸ (we note that the performance typically remains the same for smaller values of λ)."
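
The experiment-setup row above refers to training a kernel classifier with a fixed, very small regularization parameter λ = 10⁻⁸ on precomputed kernel matrices. A minimal sketch of that setup is kernel ridge regression: solve (K + nλI)α = y for the dual coefficients. The Gaussian kernel and the toy data below are illustrative stand-ins, not the paper's convolutional kernels; function names are hypothetical.

```python
import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    # Illustrative stand-in kernel: exp(-||x - z||^2 / (2 * bandwidth^2)).
    # The paper instead uses multi-layer convolutional kernels.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def kernel_ridge_fit(X, y, lam=1e-8):
    # Solve (K + n * lam * I) alpha = y, with lam matching the paper's
    # fixed regularization parameter of 1e-8.
    n = X.shape[0]
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_test):
    # Predictions are kernel evaluations weighted by the dual coefficients.
    return gaussian_kernel(X_test, X_train) @ alpha

# Toy data: with such a small lam, the solution nearly interpolates
# the training labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = np.sign(X[:, 0])
alpha = kernel_ridge_fit(X, y, lam=1e-8)
preds = kernel_ridge_predict(X, alpha, X)
print(np.mean(np.sign(preds) == y))
```

Because λ is so small, the fitted function essentially interpolates the training data, which is consistent with the paper's remark that performance remains the same for smaller λ.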