Ahpatron: A New Budgeted Online Kernel Learning Machine with Tighter Mistake Bound

Authors: Yun Liao, Junfan Li, Shizhong Liao, Qinghua Hu, Jianwu Dang

AAAI 2024

Reproducibility assessment. Each variable below is listed with its result and the LLM response that supports it.
Research Type: Experimental
LLM Response: "Experimental results show that Ahpatron outperforms the state-of-the-art algorithms on the same or a smaller budget." (Table 2: Comparison with the state-of-the-art algorithms)
Researcher Affiliation: Academia
LLM Response: "Yun Liao, Junfan Li, Shizhong Liao, Qinghua Hu, Jianwu Dang. College of Intelligence and Computing, Tianjin University, Tianjin 300350, China. {yliao,junfli,szliao,huqinghua}@tju.edu.cn, jdang@jaist.ac.jp"
Pseudocode: Yes
LLM Response: "Algorithm 1: AVP; Algorithm 2: Ahpatron"
Open Source Code: Yes
LLM Response: "Codes and datasets: https://github.com/alg4ml/Ahpatron.git"
Open Datasets: Yes
LLM Response: "We download six binary classification datasets from the UCI machine learning repository (http://archive.ics.uci.edu/ml/datasets.php) and the LIBSVM website (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html), as shown in Table 1." (A sketch of reading the LIBSVM data format appears after these entries.)
Dataset Splits: No
LLM Response: The paper does not specify exact train/validation/test splits or percentages. It describes online learning, in which examples are processed sequentially, so fixed validation splits in the batch-learning sense typically do not apply. (A minimal sketch of this sequential protocol appears after these entries.)
Hardware Specification: Yes
LLM Response: "All algorithms are implemented in R on a Windows machine with a 2.8 GHz Core(TM) i7-1165G7 CPU."
Software Dependencies: No
LLM Response: The paper states "All algorithms are implemented in R" but gives no version number for R and lists no other software dependencies with versions.
Experiment Setup: Yes
LLM Response: "For BOGD++, NOGD, and FOGD, we choose the step size of gradient descent from {10^[-3:1:3]}. The other parameters of BOGD++ and NOGD follow the original paper. All parameters of POMDR also follow the original paper. For Projectron and Projectron++, there is a parameter 0 < η < 1 balancing the memory costs and prediction performance. We choose η ∈ {0.1, 0.9}. For Ahpatron, we set the parameters following Theorem 6, that is, η = 0.0005, λ = U/B². We choose the best ε ∈ {0.5, 0.6, 0.7, 0.8, 0.9} in hindsight, and set σ = 1 for all datasets. If the per-round running time of Projectron++ is larger than 1 hour, then we set σ = 2." (A sketch of these parameter grids appears after these entries.)
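For context on the "Open Datasets" entry: LIBSVM distributes its binary classification datasets in the svmlight/libsvm text format. The snippet below is a minimal sketch of reading such a file with scikit-learn; it is illustrative only (the paper's own code is in R), and the file name is a placeholder.

```python
from sklearn.datasets import load_svmlight_file

# Read a dataset in LIBSVM (svmlight) text format.
# "dataset.txt" is a placeholder for any file downloaded from the
# LIBSVM binary-classification page linked above.
X, y = load_svmlight_file("dataset.txt")

print(X.shape)  # sparse feature matrix: (n_examples, n_features)
print(set(y))   # label set, typically {-1.0, +1.0} for binary data
```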
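On the "Dataset Splits" entry: in the online protocol there is no held-out split; the learner predicts on each example before seeing its label and is scored by cumulative mistakes. Below is a minimal sketch under an assumed `predict`/`update` learner interface (hypothetical, not the paper's Ahpatron implementation).

```python
def run_online(learner, examples):
    """Feed (x, y) pairs to `learner` one at a time and count mistakes.

    `learner` is any object with hypothetical predict(x) and update(x, y)
    methods; labels y are in {-1, +1}.
    """
    mistakes = 0
    for x, y in examples:
        y_hat = learner.predict(x)  # predict before the label is revealed
        if y_hat != y:
            mistakes += 1           # online performance = cumulative mistakes
        learner.update(x, y)        # then update on the revealed label
    return mistakes
```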
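On the "Experiment Setup" entry: the grid {10^[-3:1:3]} denotes step sizes 10^-3, 10^-2, ..., 10^3 (exponents from -3 to 3 in steps of 1). The sketch below only enumerates the grids reported in the paper and picks an ε best in hindsight; the `evaluate` callable is a hypothetical stand-in for running an algorithm and counting its mistakes, not the authors' R code.

```python
# Step sizes for BOGD++, NOGD, and FOGD: powers of ten from 1e-3 to 1e3.
step_sizes = [10.0 ** k for k in range(-3, 4)]  # [0.001, 0.01, ..., 1000.0]

# Candidate epsilon values for Ahpatron, chosen best in hindsight.
epsilons = [0.5, 0.6, 0.7, 0.8, 0.9]

def best_in_hindsight(candidates, evaluate):
    """Return the candidate minimizing `evaluate`.

    `evaluate` is a hypothetical callable mapping a parameter value to the
    number of online mistakes incurred with that value.
    """
    return min(candidates, key=evaluate)

# Example usage with a dummy evaluator (replace with a real online run):
best_eps = best_in_hindsight(epsilons, lambda eps: abs(eps - 0.7))
print(best_eps)  # -> 0.7
```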