Quantization Algorithms for Random Fourier Features

Authors: Xiaoyun Li, Ping Li

ICML 2021

Reproducibility assessment (variable, result, and supporting excerpt):
Research Type: Experimental. Evidence: "experiments confirm the effectiveness and efficiency of the proposed method." and, from Section 5 (Experiments): "We conduct experiments with compressed RFFs on three popular learning tasks: kernel SVM (KSVM), kernel logistic regression (KLR) and kernel ridge regression (KRR)."
Researcher Affiliation: Industry. Evidence: "Xiaoyun Li, Ping Li, Cognitive Computing Lab, Baidu Research, 10900 NE 8th St, Bellevue, WA 98004, USA"
Pseudocode: Yes. Evidence: "Algorithm 1. Lloyd-Max Algorithm (reproduced from Wu (1992))"
Open Source Code: No. The paper does not provide any explicit statement or link indicating the availability of its source code.
Open Datasets: Yes. The ASU-DB (Li et al., 2016) and LIBSVM (Chang and Lin, 2011) websites are cited as sources for the datasets used.
Dataset Splits: No. The paper states "We randomly split each dataset into 60% for training and 40% for testing" and "For Cover Type, the dataset is randomly divided into training and test set with equal size", but it does not specify a validation split.
Hardware Specification: No. The paper does not provide specific details about the hardware used for its experiments, such as exact GPU/CPU models or processor types.
Software Dependencies: No. The paper states "LIBLINEAR (Chang and Lin, 2011) is used as the solver." but does not provide a version number for this or any other software dependency.
Experiment Setup: Yes. Evidence: "The parameter C in SVM is fine tuned for every compression method, b and m respectively.", "train logistic regression using Stochastic Gradient Descent (SGD) with cross-entropy loss and minibatch size 500", and "We train the models for at least 50 epochs until the test accuracy stabilizes."
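For background on the "compressed RFFs" the experiments use: the paper quantizes random Fourier features. Below is a minimal sketch of the standard, unquantized RFF map of Rahimi and Recht for the Gaussian kernel, which the quantization algorithms operate on. This is not the paper's method; function names, the dimension D, and the bandwidth gamma are illustrative choices.

```python
import math
import random

def sample_rff_params(d, D, gamma=1.0, seed=0):
    """Draw RFF parameters for the Gaussian kernel exp(-gamma * ||x - y||^2):
    w_j ~ N(0, 2*gamma*I) and b_j ~ Uniform[0, 2*pi]."""
    rng = random.Random(seed)
    weights = [[rng.gauss(0.0, math.sqrt(2.0 * gamma)) for _ in range(d)]
               for _ in range(D)]
    biases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(D)]
    return weights, biases

def rff_features(x, weights, biases):
    """Map x to D features z_j(x) = sqrt(2/D) * cos(w_j . x + b_j), so that
    z(x) . z(y) is an unbiased estimate of the Gaussian kernel value."""
    D = len(weights)
    return [math.sqrt(2.0 / D) *
            math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(weights, biases)]

# Sanity check: the feature inner product approximates the exact kernel.
w, b = sample_rff_params(d=3, D=2000, gamma=0.5, seed=42)
x, y = [0.1, 0.2, 0.3], [0.2, 0.1, 0.4]
approx = sum(zx * zy for zx, zy in
             zip(rff_features(x, w, b), rff_features(y, w, b)))
exact = math.exp(-0.5 * sum((a - c) ** 2 for a, c in zip(x, y)))
```

The approximation error decays as O(1/sqrt(D)); the paper's contribution is quantizing each feature z_j(x) to b bits, which is where the Lloyd-Max machinery enters.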
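Algorithm 1 in the paper reproduces the Lloyd-Max algorithm from Wu (1992). As a rough illustration of what that algorithm computes, here is a sketch of a Lloyd-Max scalar quantizer fit to empirical samples, alternating the two classical optimality conditions; the iteration count and initialization are assumptions, not the paper's specifics.

```python
import random

def lloyd_max(samples, levels, iters=50):
    """Fit a Lloyd-Max scalar quantizer on empirical samples by alternating:
    (1) decision boundaries = midpoints of adjacent reconstruction levels;
    (2) each reconstruction level = mean (centroid) of the samples in its cell."""
    xs = sorted(samples)
    lo, hi = xs[0], xs[-1]
    # Initialize reconstruction levels uniformly over the data range.
    c = [lo + (hi - lo) * (k + 0.5) / levels for k in range(levels)]
    for _ in range(iters):
        bounds = [(c[k] + c[k + 1]) / 2 for k in range(levels - 1)]
        cells = [[] for _ in range(levels)]
        for x in xs:
            k = sum(x > b for b in bounds)  # index of the cell containing x
            cells[k].append(x)
        # Keep the old level if a cell happens to be empty.
        c = [sum(cell) / len(cell) if cell else c[k]
             for k, cell in enumerate(cells)]
    return c  # sorted reconstruction levels; boundaries are their midpoints

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(5000)]
levels = lloyd_max(data, levels=4)
```

On standard-normal samples, a 4-level quantizer converges near the known optimum (inner levels around ±0.45, outer around ±1.5), which is the kind of distribution-adapted quantizer the paper applies to RFF coordinates.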
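The quoted "60% for training and 40% for testing" random split (with no validation set) can be sketched as a simple index partition; the function name and seed are illustrative.

```python
import random

def train_test_split(n, train_frac=0.6, seed=0):
    """Randomly partition indices 0..n-1 into train/test index lists,
    mirroring the paper's 60/40 random split (no validation set)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * train_frac)
    return idx[:cut], idx[cut:]

train_idx, test_idx = train_test_split(1000, train_frac=0.6, seed=1)
```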
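The KLR training recipe quoted above (SGD with cross-entropy loss, minibatch size 500) can be sketched as plain minibatch gradient descent on binary logistic regression. The learning rate, epoch count, and toy data below are assumptions for illustration; only the loss, optimizer, and batch size come from the paper.

```python
import math
import random

def sgd_logistic(X, y, lr=0.5, epochs=20, batch_size=500, seed=0):
    """Minibatch SGD for binary logistic regression with cross-entropy loss.
    Per-example gradient of the loss: (sigmoid(w . x) - y) * x."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for start in range(0, len(idx), batch_size):
            batch = idx[start:start + batch_size]
            grad = [0.0] * d
            for i in batch:
                p = 1.0 / (1.0 + math.exp(
                    -sum(wj * xj for wj, xj in zip(w, X[i]))))
                for j in range(d):
                    grad[j] += (p - y[i]) * X[i][j]
            w = [wj - lr * gj / len(batch) for wj, gj in zip(w, grad)]
    return w

# Toy linearly separable data: label 1 iff x0 + x1 > 0 (last column is a bias).
rng = random.Random(2)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1), 1.0] for _ in range(2000)]
y = [1 if x[0] + x[1] > 0 else 0 for x in X]
w = sgd_logistic(X, y, lr=0.5, epochs=20, batch_size=500)
acc = sum((sum(wj * xj for wj, xj in zip(w, x)) > 0) == (yi == 1)
          for x, yi in zip(X, y)) / len(X)
```

In the paper this optimizer would be run on (quantized) RFF features rather than raw inputs, and training continues for at least 50 epochs until test accuracy stabilizes.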