The Teaching Dimension of Regularized Kernel Learners

Authors: Hong Qian, Xu-Hui Liu, Chen-Xi Su, Aimin Zhou, Yang Yu

ICML 2022

Reproducibility variables, each listed with the assessed result and the supporting LLM response quoted from the paper:
Research Type: Experimental. LLM response: "The extensive experimental results of teaching the optimization-based learners verify the theoretical findings." From Section 5 (Experiments): "In this section, we perform the empirical study to verify the theoretical results."
Researcher Affiliation: Academia. LLM response: (1) School of Computer Science and Technology, East China Normal University, Shanghai, China; (2) Shanghai Key Laboratory of Multidimensional Information Processing; (3) School of Artificial Intelligence, Nanjing University, Nanjing, China; (4) National Key Laboratory for Novel Software Technology; (5) Shanghai Institute of AI for Education.
Pseudocode: No. LLM response: No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code: Yes. LLM response: "The code is available at https://github.com/liuxhym/STARKE.git."
Open Datasets: Yes. LLM response: "For regression, we choose two synthetic datasets: the make-regression (MR) dataset from sklearn as well as the Sin dataset, and two real-world datasets: MPG from UCI (Blake et al., 1998) and Eunite. ... For classification, we choose two synthetic datasets: the two-moon (Moon) dataset as well as the two-circles (Circle) dataset from sklearn, and two UCI binary classification datasets: Adult and Sonar."
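The synthetic datasets named in this quote ship with scikit-learn's dataset generators; a minimal sketch of loading them (the sample counts and noise levels below are illustrative placeholders, not the paper's settings):

```python
from sklearn.datasets import make_regression, make_moons, make_circles

# Regression: the make-regression (MR) synthetic dataset
X_mr, y_mr = make_regression(n_samples=200, n_features=10, noise=0.1,
                             random_state=0)

# Classification: the two-moon (Moon) and two-circles (Circle) datasets
X_moon, y_moon = make_moons(n_samples=200, noise=0.1, random_state=0)
X_circ, y_circ = make_circles(n_samples=200, noise=0.05, factor=0.5,
                              random_state=0)

print(X_mr.shape, X_moon.shape, X_circ.shape)
```

The Sin dataset and the real-world sets (MPG, Eunite, Adult, Sonar) would need to be generated or fetched separately, e.g. from the UCI repository.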
Dataset Splits: No. LLM response: The paper describes the datasets used and how target hypotheses are generated, but it does not explicitly specify training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification: No. LLM response: The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies: No. LLM response: The paper mentions using sklearn for datasets and describes methods like the Nyström method, but it does not provide specific version numbers for these or other software dependencies.
Experiment Setup: Yes. LLM response: "For regression, ... The regularization function Ω(x²) = x² is applied. For classification, ... The classification threshold is set as zero in experiments. The regularization function Ω(x²) = (1/200)x² is applied."
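The identity regularizer Ω(x²) = x² quoted above corresponds to the standard squared-norm penalty, under which the regularized kernel learner reduces to kernel ridge regression with closed-form coefficients α = (K + λI)⁻¹y. A dependency-free sketch on a Sin-style regression target (the kernel choice, λ, and γ below are illustrative assumptions, not the paper's settings):

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between two scalar inputs."""
    return math.exp(-gamma * (x - z) ** 2)

def solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c]
                              for c in range(r + 1, n))) / M[r][r]
    return w

def kernel_ridge_fit(xs, ys, lam=0.1, gamma=1.0):
    """Minimize sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2,
    i.e. the regularized learner with Omega(x^2) = x^2.
    The minimizer has coefficients alpha = (K + lam I)^{-1} y."""
    n = len(xs)
    A = [[rbf_kernel(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(A, ys)

def predict(xs, alpha, x, gamma=1.0):
    """Evaluate f(x) = sum_i alpha_i k(x_i, x)."""
    return sum(a * rbf_kernel(xi, x, gamma) for a, xi in zip(alpha, xs))

# Fit a Sin-style target on a handful of teaching points
xs = [i * 0.5 for i in range(10)]
ys = [math.sin(x) for x in xs]
alpha = kernel_ridge_fit(xs, ys, lam=0.01)
print(predict(xs, alpha, 1.0))  # close to sin(1.0) ≈ 0.841
```

For classification, the quoted setup thresholds the same kind of real-valued output at zero to produce a label.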