Learning Non-Linear Dynamics of Decision Boundaries for Maintaining Classification Performance

Authors: Atsutoshi Kumagai, Tomoharu Iwata

AAAI 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We conducted experiments using two synthetic and four real-world data sets to confirm the effectiveness of the proposed method." |
| Researcher Affiliation | Industry | Atsutoshi Kumagai, NTT Secure Platform Laboratories, NTT Corporation, 3-9-11 Midori-cho, Musashino-shi, Tokyo, Japan (kumagai.atsutoshi@lab.ntt.co.jp); Tomoharu Iwata, NTT Communication Science Laboratories, NTT Corporation, 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan (iwata.tomoharu@lab.ntt.co.jp) |
| Pseudocode | No | The paper contains no section or figure explicitly labeled "Pseudocode" or "Algorithm". |
| Open Source Code | No | The paper neither states that its source code is available nor provides a link to a code repository. |
| Open Datasets | Yes | "We used four real-world data sets: SPAM2 [1], ELEC2 [2], ONP [3], and BLOG [4]." [1] http://www.comp.dit.ie/sjdelany/Dataset.htm [2] http://www.inescporto.pt/~jgama/ales/ales_5.html [3] https://archive.ics.uci.edu/ml/datasets/Online+News+Popularity [4] https://archive.ics.uci.edu/ml/datasets/BlogFeedback |
| Dataset Splits | No | The paper specifies how training and testing data were split (e.g., "remaining ten time units as test data"; "80% of samples randomly at every training time unit to create ten different training data"), but it does not mention a separate validation set or split for hyperparameter tuning or model selection. |
| Hardware Specification | No | The paper gives no details about the hardware (e.g., CPU or GPU model, memory) used to run the experiments. |
| Software Dependencies | No | The paper lists no software dependencies or version numbers (e.g., programming languages, libraries, frameworks). |
| Experiment Setup | Yes | "For the proposed method and AAAI16, the number of iterations for learning was 2000 in all experiments. For Batch, Online and Present, we chose the regularization parameter from {10^-1, 1, 10^1} in terms of which average AUC over all test time units was the best." |
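The Experiment Setup row quotes a simple model-selection protocol: train with each regularization value in {10^-1, 1, 10^1} and keep the one with the best average AUC over all test time units. A minimal sketch of that selection loop, using a hand-rolled L2-regularized logistic regression and synthetic rotating-boundary data as stand-ins (the paper's actual models and datasets are not reproduced here; all function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def roc_auc(y_true, scores):
    """ROC AUC via the Mann-Whitney U statistic (assumes continuous, untied scores)."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def make_time_unit(t, n=200):
    """Synthetic time unit whose linear decision boundary rotates slowly with t."""
    angle = 0.05 * t
    w_true = np.array([np.cos(angle), np.sin(angle)])
    X = rng.normal(size=(n, 2))
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(int)
    return X, y

def train_logreg(X, y, lam, iters=500, lr=0.05):
    """L2-regularized logistic regression fit by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + lam * w)
    return w

# Train on the first five time units, evaluate on the last five.
train_units = [make_time_unit(t) for t in range(5)]
test_units = [make_time_unit(t) for t in range(5, 10)]
X_tr = np.vstack([X for X, _ in train_units])
y_tr = np.concatenate([y for _, y in train_units])

# Grid search over {10^-1, 1, 10^1}, scored by mean AUC across test time units.
mean_auc = {}
for lam in (0.1, 1.0, 10.0):
    w = train_logreg(X_tr, y_tr, lam)
    mean_auc[lam] = np.mean([roc_auc(y, X @ w) for X, y in test_units])
best_lam = max(mean_auc, key=mean_auc.get)
```

Because the rank-based AUC is invariant to the scale of the scores, even a heavily regularized (near-zero-norm) weight vector can score well if its direction is right, so the grid comparison isolates how regularization affects the learned boundary rather than score magnitudes.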