Kernelized Online Imbalanced Learning with Fixed Budgets

Authors: Junjie Hu, Haiqin Yang, Irwin King, Michael Lyu, Anthony Man-Cho So

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present both theoretical analysis and extensive experimental comparison to demonstrate the effectiveness of our proposed KOIL." "In this section, we present extensive experimental results on real-world datasets to demonstrate the effectiveness of our proposed KOIL."
Researcher Affiliation | Academia | "1 Shenzhen Key Laboratory of Rich Media Big Data Analytics and Applications, Shenzhen Research Institute, The Chinese University of Hong Kong; 2 Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong; 3 Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong"
Pseudocode | Yes | "Algorithm 1 Kernelized Online Imbalanced Learning (KOIL) with Fixed Budgets" (a hedged sketch of the fixed-budget structure follows the table)
Open Source Code | Yes | "Demo codes in both C++ and Matlab can be downloaded in https://www.dropbox.com/sh/nuepinmqzepx54r/AAAKuL4NSZe0IRpGuNIsuxQxa?dl=0."
Open Datasets | Yes | "We conduct experiments on 14 benchmark datasets obtained from the UCI and the LIBSVM websites."
Dataset Splits | Yes | "We set the learning rate to a small constant η = 0.01 and apply a 5-fold cross validation to find the penalty cost C ∈ 2^[−10:10]. For kernel-based methods, we use the Gaussian kernel and tune its parameter σ ∈ 2^[−10:10] by a 5-fold cross validation. For each dataset, we conduct 5-fold cross validation for all the algorithms, where four folds of the data are used for training while the rest is used for testing."
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments were provided in the paper.
Software Dependencies | No | The paper mentions demo codes in "C++ and Matlab" but does not specify particular software dependencies with version numbers used for the experiments (e.g., specific libraries, frameworks, or solvers with their versions).
Experiment Setup | Yes | "We set the learning rate to a small constant η = 0.01 and apply a 5-fold cross validation to find the penalty cost C ∈ 2^[−10:10]. For kernel-based methods, we use the Gaussian kernel and tune its parameter σ ∈ 2^[−10:10] by a 5-fold cross validation. For NORMA, we apply a 5-fold cross validation to select λ and ν ∈ 2^[−10:10]. For Projectron, we apply a similar 5-fold cross validation to select the parameter of projection difference η ∈ 2^[−10:10]. We set the buffer size to 100 for each class for all related algorithms, including OAMseq, RBP, and Forgetron." (the cross-validation sweep is illustrated in the sketch after the table)
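
The Pseudocode row refers to the paper's Algorithm 1, which maintains one fixed-size support-vector buffer per class. The Python sketch below illustrates only that fixed-budget structure; the hinge-style update and the drop-oldest eviction are simplifying assumptions, not KOIL's actual replacement and weight-compensation scheme, for which the released C++/Matlab demo code is the authoritative reference.

```python
import numpy as np

class BudgetedKernelLearner:
    """Illustrative fixed-budget kernelized online learner (not the paper's
    Algorithm 1). At most `budget` support vectors are kept per class, matching
    the paper's buffer size of 100 per class; the drop-oldest eviction below is
    a placeholder for KOIL's replacement and weight-compensation strategy."""

    def __init__(self, sigma, eta=0.01, budget=100):
        self.sigma = sigma               # Gaussian kernel width
        self.eta = eta                   # learning rate (0.01 in the paper)
        self.budget = budget             # max support vectors per class
        self.buffers = {+1: [], -1: []}  # per-class lists of (x, alpha)

    def _k(self, x1, x2):
        """Gaussian kernel k(x1, x2) = exp(-||x1 - x2||^2 / (2 sigma^2))."""
        d = x1 - x2
        return np.exp(-np.dot(d, d) / (2.0 * self.sigma ** 2))

    def decision(self, x):
        """f(x) = sum_i alpha_i * k(sv_i, x) over both class buffers."""
        return sum(a * self._k(sv, x)
                   for buf in self.buffers.values() for sv, a in buf)

    def update(self, x, y):
        """One online step: on a margin violation, add x as a support vector
        for class y, then enforce the per-class budget."""
        if y * self.decision(x) < 1.0:
            buf = self.buffers[int(y)]
            buf.append((x, self.eta * y))
            if len(buf) > self.budget:
                buf.pop(0)               # placeholder rule: evict the oldest
```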
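
The Dataset Splits and Experiment Setup rows describe one protocol: 5-fold cross validation (four folds for training, one for testing) with hyperparameters swept over powers of two from 2^-10 to 2^10. Below is a minimal sketch of that protocol, reusing the BudgetedKernelLearner placeholder above; plain accuracy stands in for whatever performance measure the reproduction targets, and C_grid is listed only for completeness, since the simplified learner has no penalty-cost term.

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Yield (train_idx, test_idx) for 5 folds: four folds train, one tests."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), 5)
    for k in range(5):
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, folds[k]

# Grids quoted above: powers of two from 2^-10 to 2^10.
C_grid = [2.0 ** p for p in range(-10, 11)]      # penalty cost (full KOIL only)
sigma_grid = [2.0 ** p for p in range(-10, 11)]  # Gaussian kernel width

def cv_score(X, y, sigma, eta=0.01, budget=100):
    """Mean test accuracy over the 5 folds for one hyperparameter setting."""
    scores = []
    for train, test in five_fold_indices(len(X)):
        model = BudgetedKernelLearner(sigma=sigma, eta=eta, budget=budget)
        for i in train:                  # single online pass over training folds
            model.update(X[i], y[i])
        preds = np.array([1 if model.decision(X[i]) >= 0 else -1 for i in test])
        scores.append(float(np.mean(preds == y[test])))
    return float(np.mean(scores))

# Tune the kernel width by 5-fold cross validation, as the paper describes:
# best_sigma = max(sigma_grid, key=lambda s: cv_score(X, y, s))
```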