Learning Future Classifiers without Additional Data

Authors: Atsutoshi Kumagai, Tomoharu Iwata

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The effectiveness of the proposed method is demonstrated with experiments using synthetic and real-world data sets. We conducted experiments using two synthetic and two real-world data sets to confirm the effectiveness of the proposed method.
Researcher Affiliation | Industry | Atsutoshi Kumagai, NTT Secure Platform Laboratories, NTT Corporation, 3-9-11 Midori-cho, Musashino-shi, Tokyo, Japan (kumagai.atsutoshi@lab.ntt.co.jp); Tomoharu Iwata, NTT Communication Science Laboratories, NTT Corporation, 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan (iwata.tomoharu@lab.ntt.co.jp)
Pseudocode | No | The paper describes the learning algorithms mathematically and textually but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing source code for the described methodology, nor any links to code repositories.
Open Datasets | Yes | We used two synthetic and two real-world data sets. The real-world data sets, ELEC2 and Chess.com, are public benchmark data sets for evaluating stream data classification and concept drift. For details on these data sets, refer to (Gama et al. 2014).
Dataset Splits | Yes | For the synthetic data, we created ten different sets for every synthetic data set, set T = 10, and used the remaining ten time units as test data. For ELEC2, we set two weeks as one time unit, giving T = 29, and used the remaining ten time units as test data. For Chess.com, we set 20 games as one time unit, giving T = 15, and used the remaining ten time units as test data. For the real-world data sets, we randomly chose 80% of the instances in every time unit to create five different training sets and evaluated the average AUC over these sets. (A hedged sketch of this split construction appears after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, memory).
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used.
Experiment Setup | Yes | With the proposed method, we set the hyperparameters as a_k = u = u_0 = 1 and b_k = v = v_0 = 0.1 for all data sets, and fixed m at 3 for the real-world data sets based on preliminary experiments. In addition, the regularization parameter for the two-step algorithm was set to the same value as for the online method.
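
The paper releases no code, so the split construction described under Dataset Splits can only be approximated. The following is a minimal Python sketch under that assumption: it presumes each instance already carries a time-unit index (two-week blocks for ELEC2, 20-game blocks for Chess.com), and the function name and arguments are illustrative rather than taken from the paper.

```python
import numpy as np

def time_unit_split(unit_ids, T, n_test_units=10,
                    train_fraction=0.8, n_repeats=5, seed=0):
    """Return (train_index_sets, test_index) for a time-stamped stream.

    unit_ids : 1-D array with each instance's time-unit index, e.g.
               two-week blocks for ELEC2 or 20-game blocks for Chess.com.
    T        : number of leading time units used for training
               (T = 29 for ELEC2, T = 15 for Chess.com in the paper).
    The following n_test_units time units form the test data, and each
    training set keeps a random train_fraction (80% in the paper) of the
    instances in every training unit, repeated n_repeats times (five sets).
    """
    unit_ids = np.asarray(unit_ids)
    rng = np.random.default_rng(seed)

    # Test data: the n_test_units time units that follow the training period.
    test_index = np.where((unit_ids >= T) & (unit_ids < T + n_test_units))[0]

    # Training sets: subsample each of the first T time units independently.
    train_index_sets = []
    for _ in range(n_repeats):
        kept = []
        for t in range(T):
            idx = np.where(unit_ids == t)[0]
            if len(idx) == 0:
                continue
            n_keep = int(round(train_fraction * len(idx)))
            kept.append(rng.choice(idx, size=n_keep, replace=False))
        train_index_sets.append(np.sort(np.concatenate(kept)))
    return train_index_sets, test_index

# Hypothetical usage for an ELEC2-like stream of 39 time units:
# train_sets, test_idx = time_unit_split(unit_ids, T=29)
```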
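
Similarly, the hyperparameter values reported under Experiment Setup can be collected into a single configuration for a reimplementation attempt. The dictionary below is only an illustrative container: the paper defines the symbols a_k, b_k, u, u_0, v, v_0, and m mathematically and prescribes no configuration format.

```python
# Hyperparameter values reported for the proposed method; the layout of
# this dictionary is illustrative, not taken from the paper.
PROPOSED_HYPERPARAMS = {
    "a_k": 1.0, "u": 1.0, "u_0": 1.0,  # set to 1 for all data sets
    "b_k": 0.1, "v": 0.1, "v_0": 0.1,  # set to 0.1 for all data sets
    "m": 3,  # fixed for the real-world data sets via preliminary experiments
}
```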