Incremental and Decremental Optimal Margin Distribution Learning

Authors: Li-Jun Chen, Teng Zhang, Xuanhua Shi, Hai Jin

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive empirical studies show that ID-ODM can achieve a 9.1× speedup on average with almost no generalization loss compared to retraining ODM on the new data set from scratch. Data sets: experiments are performed on nine real-world data sets available on the LIBSVM website. Evaluation measures: the running time of each method is compared to verify efficiency, and accuracy on the test set is compared to validate effectiveness. Results: Figure 3 demonstrates the cumulative running time of SID-ODM, BID-ODM, SR-ODM, and BR-ODM on each data set.
Researcher Affiliation | Academia | Li-Jun Chen, Teng Zhang, Xuanhua Shi, and Hai Jin. National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, China. {ljchen, tengzhang, xhshi, hjin}@hust.edu.cn
Pseudocode | Yes | Algorithm 1: ID-ODM
Open Source Code | No | The paper does not provide an explicit statement or link to source code for the described methodology.
Open Datasets | Yes | We perform experiments on nine real-world data sets available on the LIBSVM website: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
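A minimal sketch of loading one of these LIBSVM binary data sets for a reproduction attempt, assuming scikit-learn and a locally downloaded file; the data set name "a9a" is an illustrative choice, not one confirmed by the paper:

    from sklearn.datasets import load_svmlight_file

    # Load a LIBSVM-format binary classification data set downloaded from
    # https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
    # ("a9a" is illustrative; the paper uses nine such data sets).
    X, y = load_svmlight_file("a9a")  # X: sparse feature matrix, y: labels in {-1, +1}
    print(X.shape, y.shape)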
Dataset Splits | No | The paper states 'We randomly divide all data sets into training and test sets with a ratio of 4:1' and 'we randomly select 75% of the training data as historical data and the rest are varying data'. While hyperparameters are tuned by grid search, a distinct validation-set split is not explicitly described or quantified.
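A minimal sketch of the stated split protocol (4:1 train/test, then 75% of the training data as historical data), assuming scikit-learn; the random seed is an assumption, since the paper only says the splits are random:

    from sklearn.datasets import load_svmlight_file
    from sklearn.model_selection import train_test_split

    X, y = load_svmlight_file("a9a")  # illustrative data set, as in the sketch above

    # 4:1 train/test split, as stated in the paper (random_state is assumed).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # 75% of the training data serves as historical data; the remaining 25% is
    # the varying data that arrives (incremental) or is removed (decremental).
    X_hist, X_vary, y_hist, y_vary = train_test_split(
        X_train, y_train, train_size=0.75, random_state=0)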
Hardware Specification | Yes | All methods are performed on a machine with an 11th Gen Intel(R) Core(TM) i5-11320H @ 3.20GHz CPU and 16GB of main memory.
Software Dependencies | No | The paper mentions that 'ODM is optimized by dual coordinate gradient descent' but does not specify any software libraries, packages, or version numbers for reproducibility.
Experiment Setup | Yes | Hyperparameters: the experiments involve three model hyperparameters {λ, θ, σ}. λ and θ are tuned by grid search over {10^-3, ..., 10^3} and {0.1, 0.2, ..., 0.9}, respectively. The kernel hyperparameter σ is fixed to 1/d, where d is the number of features.
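A minimal sketch of the stated hyperparameter grid; the ODM solver is the authors' own and no reference implementation is released, so only the grid construction is shown, and the feature count d = 123 (that of a9a) is an illustrative assumption:

    import itertools

    d = 123                       # number of features (a9a has 123; illustrative)
    sigma = 1.0 / d               # kernel hyperparameter, fixed to 1/d as in the paper

    lambdas = [10.0 ** k for k in range(-3, 4)]         # {10^-3, ..., 10^3}
    thetas = [round(0.1 * k, 1) for k in range(1, 10)]  # {0.1, 0.2, ..., 0.9}

    # (lambda, theta) pairs to evaluate by grid search; training and evaluation
    # calls are omitted because the ODM solver is not publicly available.
    grid = list(itertools.product(lambdas, thetas))
    print(len(grid), "configurations; sigma =", sigma)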