NeuralEF: Deconstructing Kernels by Deep Neural Networks

Authors: Zhijie Deng, Jiaxin Shi, Jun Zhu

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically evaluate NeuralEF in a variety of scenarios. We first evidence that NeuralEF can perform as well as the Nyström method and beat SpIN when handling classic kernels, yet with more sustainable resource consumption (see Figure 2)." (A minimal Nyström baseline sketch appears after the table.)
Researcher Affiliation | Collaboration | (1) Dept. of Comp. Sci. & Tech., BNRist Center, Tsinghua-Bosch Joint Center for ML, Tsinghua University; Peng Cheng Laboratory; (2) Qingyuan Research, Shanghai Jiao Tong University; (3) Microsoft Research New England.
Pseudocode | Yes | "Algorithm 1: Find the top-k eigenpairs of a kernel by NeuralEF." (A minimal sketch of this kind of objective appears after the table.)
Open Source Code | Yes | "Code is available at https://github.com/thudzj/neuraleigenfunction."
Open Datasets | Yes | "We then consider using the NN-GP kernels specified by convolutional neural networks (CNNs) (dubbed as CNN-GP kernels) to process MNIST images." and "We next experiment on the empirical NTKs corresponding to practically sized NNs. Without loss of generality, we train a ResNet-20 classifier (He et al., 2016) to distinguish the airplane images from the automobile ones from CIFAR-10 (Krizhevsky et al., 2009)."
Dataset Splits | No | The paper mentions 'training data' and 'test data' extensively but does not explicitly describe a validation split or its use for hyperparameter tuning.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models, processor types, or memory specifications.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer, the scipy.linalg.eigh API, and the laplace library, but does not provide version numbers for these or other dependencies.
Experiment Setup | Yes | "We set batch size B as 256 and optimize with an Adam (Kingma & Ba, 2015) optimizer with 10^-3 learning rate unless otherwise stated." and "We train the CIFAR-10 classifiers with ResNet architectures for totally 150 epochs under MAP principle. The optimization settings are identical to the above ones. In particular, the weight decay is 10^-4, thus we can estimate the prior variance σ_0^2 = 1/(50000 × 10^-4) = 0.2, where 50000 is the number of training data N. After classifier training, we fuse the BN layers into the convolutional layers to get a compact model." (Sketches of the prior-variance arithmetic and the BN fusion appear after the table.)
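
The research-type row compares NeuralEF against the Nyström method on classic kernels. For context, below is a minimal NumPy sketch of the Nyström approximation to the top-k eigenpairs of a kernel; `kernel_fn`, `landmarks`, and `nystrom_eigenpairs` are illustrative names rather than identifiers from the paper's code, and the sketch assumes the top-k eigenvalues of the Gram matrix are positive.

```python
import numpy as np

def nystrom_eigenpairs(kernel_fn, landmarks, k):
    """Approximate the top-k eigenpairs of a kernel with the Nystrom method.

    kernel_fn(X, Y) is assumed to return the kernel matrix between two sets
    of points; landmarks is an (M, d) array of points sampled from the data.
    """
    M = landmarks.shape[0]
    K = kernel_fn(landmarks, landmarks)            # (M, M) Gram matrix
    eigvals, eigvecs = np.linalg.eigh(K)           # ascending order
    idx = np.argsort(eigvals)[::-1][:k]            # indices of the top-k
    lam, U = eigvals[idx], eigvecs[:, idx]         # (k,) and (M, k)
    kernel_eigvals = lam / M                       # eigenvalue estimates

    def eigenfunctions(x):
        # Nystrom extension: psi_j(x) ~ sqrt(M)/lam_j * sum_m kernel(x, x_m) * U[m, j]
        Kx = kernel_fn(x, landmarks)               # (N, M)
        return np.sqrt(M) * Kx @ U / lam           # (N, k)

    return kernel_eigvals, eigenfunctions
```

For an RBF kernel, for instance, `kernel_fn` could compute exp(-||x - y||^2 / (2 * lengthscale^2)) pairwise; the returned `eigenfunctions` callable then evaluates the approximate top-k eigenfunctions at new points.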
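The pseudocode row refers to Algorithm 1, which trains k neural networks to recover the top-k eigenpairs. The snippet below is a minimal PyTorch-style sketch of the asymmetric, batch-based objective this kind of algorithm optimizes; it is not the authors' implementation, and `neuralef_loss`, `psi`, and `K` are illustrative names. Here `psi` holds the k networks' outputs on a minibatch and `K` is the kernel matrix on that same minibatch.

```python
import torch

def neuralef_loss(psi: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    """Minibatch objective for learning the top-k kernel eigenfunctions.

    psi: (B, k) outputs of the k eigenfunction networks on the batch.
    K:   (B, B) kernel matrix evaluated on the same batch.
    """
    B, k = psi.shape
    # Normalize each column so that mean(psi_j^2) == 1 on the batch.
    psi = psi / psi.norm(dim=0, keepdim=True) * B ** 0.5

    R = psi.T @ K @ psi / B ** 2          # (k, k) matrix of R_ij estimates
    R_detached = R.detach()               # stop-gradient copy for the penalty
    reward = torch.diagonal(R).sum()      # sum_j R_jj, to be maximized

    # Each psi_j is penalized only for aligning with *earlier* eigenfunctions,
    # which are treated as fixed targets via stop-gradient.
    penalty = psi.new_zeros(())
    for j in range(1, k):
        for i in range(j):
            r_ij = psi[:, i].detach() @ K @ psi[:, j] / B ** 2
            penalty = penalty + r_ij ** 2 / R_detached[i, i].clamp(min=1e-8)

    return penalty - reward               # minimize the negative objective
```

In a training loop, `psi` would come from k networks (or one network with k output heads) evaluated on the batch, `K` from the target kernel on the same batch, and the loss would be minimized with Adam as described in the experiment-setup row.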
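The experiment-setup row quotes two concrete steps: deriving the prior variance from the weight decay under the MAP view, and folding BatchNorm layers into the preceding convolutions after training. The sketch below reproduces the quoted arithmetic and shows a standard conv-BN fold for inference; `fuse_conv_bn` is a generic helper written here for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

# Prior variance implied by the weight decay under the MAP view, as quoted:
# sigma_0^2 = 1 / (N * weight_decay) = 1 / (50000 * 1e-4) = 0.2
N, weight_decay = 50_000, 1e-4
sigma0_sq = 1.0 / (N * weight_decay)
assert abs(sigma0_sq - 0.2) < 1e-12

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d layer into the preceding Conv2d (inference only)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)   # gamma / std, per channel
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused
```

Applying such a fold to every conv-BN pair of the trained ResNet gives the "compact model" mentioned in the quote while leaving inference-time outputs unchanged.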