Generalization Error Analysis of Quantized Compressive Learning

Authors: Xiaoyun Li, Ping Li

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical study is also conducted to validate our theoretical findings." From the numerical study: "In this section, we validate the theoretical findings through experiments on real-world datasets from UCI repository [12]. Table 1 provides summary statistics, where mean ρ is the average pair-wise cosine of all pairs of samples. Mean 1-NN ρ is the average cosine of each point to its nearest neighbor." See Figures 2, 4, 5. (A sketch of computing these cosine statistics appears after the table.)
Researcher Affiliation | Collaboration | Xiaoyun Li, Department of Statistics, Rutgers University, Piscataway, NJ 08854, USA (xiaoyun.li@rutgers.edu); Ping Li, Cognitive Computing Lab, Baidu Research, Bellevue, WA 98004, USA (liping11@baidu.com)
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions external tools such as LIBSVM [5] but provides no statement or link to source code for its own methodology or experimental setup.
Open Datasets | Yes | "In this section, we validate the theoretical findings through experiments on real-world datasets from UCI repository [12]."
Dataset Splits | No | The paper states "We randomly split the data to 60% for training and 40% for testing", which specifies a train/test split but does not mention a separate validation set or a validation methodology such as k-fold cross-validation.
Hardware Specification | No | The paper does not specify hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper mentions using "a linear SVM solver [5]" and cites LIBSVM, but does not give a version number for LIBSVM or for any other software dependency.
Experiment Setup | Yes | "For 1-NN classification, we take each data point as the test sample and the rest as training data over all the examples, and report the mean test accuracy. For the linear classifier, we feed the inner product estimation matrix X_Q X_Q^T as the kernel matrix into a linear SVM solver [5]. We randomly split the data to 60% for training and 40% for testing, and the best test accuracy among all hyper-parameter values C is reported, averaged over 5 repetitions." (A reconstruction sketch of this setup appears after the table.)
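
For concreteness, here is a minimal sketch of how the quoted experiment setup could be reproduced. It is a reconstruction, not the authors' code: it assumes a precomputed similarity matrix K = XQ @ XQ.T built from the quantized projections, and it substitutes scikit-learn's SVC with kernel="precomputed" (which wraps LIBSVM) for the paper's linear SVM solver. The function names, the C grid, and the random seeding are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def loo_1nn_accuracy(K, y):
    """Leave-one-out 1-NN: classify each point by its most similar
    other point under the estimated similarity matrix K = XQ @ XQ.T."""
    K = K.astype(float).copy()
    np.fill_diagonal(K, -np.inf)   # a point may not be its own neighbor
    nn = K.argmax(axis=1)          # index of the most similar other sample
    return float((y[nn] == y).mean())

def best_svm_accuracy(K, y, C_grid=(0.01, 0.1, 1, 10, 100),
                      n_repeats=5, seed=0):
    """Feed K to an SVM as a precomputed kernel; report the best test
    accuracy over the C grid, averaged over random 60/40 splits."""
    rng = np.random.RandomState(seed)
    n = len(y)
    best = []
    for _ in range(n_repeats):
        idx = rng.permutation(n)
        n_tr = int(0.6 * n)
        tr, te = idx[:n_tr], idx[n_tr:]
        accs = [
            SVC(C=C, kernel="precomputed")
            .fit(K[np.ix_(tr, tr)], y[tr])       # train-train kernel block
            .score(K[np.ix_(te, tr)], y[te])     # test-train kernel block
            for C in C_grid
        ]
        best.append(max(accs))
    return float(np.mean(best))
```

Taking the maximum over the C grid mirrors the quoted protocol of reporting "the best test accuracy among all hyper-parameter values C", and the leave-one-out loop is what the 1-NN description amounts to when every point serves once as the test sample.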
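The dataset statistics quoted in the Research Type row (mean pair-wise cosine ρ and mean 1-NN ρ) follow directly from their definitions. Below is a minimal NumPy sketch, assuming a data matrix X with one sample per row and no all-zero rows; the function name is hypothetical.

```python
import numpy as np

def cosine_stats(X):
    """Mean pair-wise cosine and mean 1-NN cosine of the rows of X (n x d)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    C = Xn @ Xn.T                                      # n x n cosine matrix
    n = C.shape[0]
    mean_rho = C[np.triu_indices(n, k=1)].mean()       # average over all distinct pairs
    np.fill_diagonal(C, -np.inf)                       # exclude self-similarity
    mean_1nn_rho = C.max(axis=1).mean()                # cosine to each point's nearest neighbor
    return float(mean_rho), float(mean_1nn_rho)
```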