Learning Latest Classifiers without Additional Labeled Data

Authors: Atsutoshi Kumagai, Tomoharu Iwata

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The effectiveness of the proposed method is demonstrated with experiments using synthetic and real-world data sets."
Researcher Affiliation | Industry | Atsutoshi Kumagai, NTT Secure Platform Laboratories (kumagai.atsutoshi@lab.ntt.co.jp); Tomoharu Iwata, NTT Communication Science Laboratories (iwata.tomoharu@lab.ntt.co.jp)
Pseudocode | No | The paper does not include pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper does not include any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "SPAM is a collection of spam and legitimate email received by one from February 1st of 2003 to January 31st of 2004 [Gama et al., 2014]. URL is a public data set of malicious and normal URLs collected over 120 days [Ma et al., 2009]."
Dataset Splits | Yes | "In our experiments, we chose the optimal hyperparameters for these methods from the following variations by using validation data. For SPAM, roughly, samples collected in the n-th month were used for labeled data, samples in the (n+1)-th month for unlabeled data, and samples in the (n+2)-th month for test data, where n = 2, 3, ..., 11."
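The rolling monthly protocol described above can be sketched as follows. This is a minimal illustration, not the paper's code; the month-keyed data layout and sample identifiers are assumptions made for the example.

```python
def rolling_splits(samples_by_month, first=2, last=11):
    """Yield (labeled, unlabeled, test) triples for a rolling monthly
    evaluation: month n is labeled, n+1 is unlabeled, n+2 is test."""
    for n in range(first, last + 1):
        yield (samples_by_month[n],
               samples_by_month[n + 1],
               samples_by_month[n + 2])

# Hypothetical data: months 2..13, each holding a few sample ids.
months = {m: [f"m{m}_s{i}" for i in range(3)] for m in range(2, 14)}
splits = list(rolling_splits(months))
print(len(splits))  # 10 splits, one per n = 2, ..., 11
```

Note that the test month of split n overlaps the labeled month of split n+2, which is inherent to this rolling design.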
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions types of models used, such as logistic regression, but does not specify software names with version numbers for dependencies (e.g., Python version, library versions).
Experiment Setup | Yes | "In our experiments, we chose the optimal hyperparameters for these methods from the following variations by using validation data: regularization parameter for classifiers c ∈ {10^-1, 1, 10^1} in all methods, regularization parameter for importance ρ ∈ {10^-1, 1, 10^1} in the proposed method and IWLR, regularization parameters for imputation r, b ∈ {10^-1, 1, 10^1}, a ∈ {1}, and imputation parameter K ∈ {1, 3, 6, 9} in the proposed method and ILR."
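The hyperparameter grid quoted above can be enumerated as in the sketch below. The dictionary keys follow the symbols in the quoted setup (c, ρ, r, b, a, K); the selection criterion on validation data is not shown, only the candidate enumeration.

```python
from itertools import product

# Grid from the quoted experiment setup; "rho" stands in for ρ.
grid = {
    "c":   [1e-1, 1, 1e1],  # classifier regularization (all methods)
    "rho": [1e-1, 1, 1e1],  # importance regularization (proposed, IWLR)
    "r":   [1e-1, 1, 1e1],  # imputation regularization
    "b":   [1e-1, 1, 1e1],  # imputation regularization
    "a":   [1],             # fixed to 1
    "K":   [1, 3, 6, 9],    # imputation parameter (proposed, ILR)
}

def candidates(grid):
    """Yield every hyperparameter combination as a dict."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

all_settings = list(candidates(grid))
print(len(all_settings))  # 3*3*3*3*1*4 = 324 candidate settings
```

Each of the 324 settings would be scored on the held-out validation data and the best-scoring one kept, as the quoted passage describes.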