Rapid Performance Gain through Active Model Reuse

Authors: Feng Shi, Yu-Feng Li

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results validate the effectiveness of ACMR. In this section, we first give the experimental setup and then show the evaluation of our proposal compared to several state-of-the-art algorithms on a number of real-world tasks.
Researcher Affiliation | Academia | Feng Shi and Yu-Feng Li, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China, {shif, liyf}@lamda.nju.edu.cn
Pseudocode | Yes | Algorithm 1: The learning algorithm for ACMR
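Algorithm 1 itself appears only in the paper. For readers who want a feel for the shape of such a procedure, below is a minimal Python sketch of a generic active model-reuse loop; the margin-based query rule, the accuracy-weighted softmax reweighting, and every name in it are assumptions made for illustration, not the paper's Algorithm 1.

```python
import numpy as np

def active_model_reuse(pretrained, X_pool, oracle, budget, lam=0.1):
    """Generic active model-reuse loop (an illustrative sketch, NOT the
    paper's Algorithm 1): query the pool points the weighted ensemble of
    pre-trained models is least sure about, then reweight the models."""
    # Cache each pre-trained model's predictions over the pool, in {-1, +1}.
    preds = np.array([m.predict(X_pool) for m in pretrained])  # shape (k, n)
    weights = np.full(len(pretrained), 1.0 / len(pretrained))  # uniform start
    queried, labels = [], []
    for _ in range(budget):
        margin = np.abs(weights @ preds)   # small margin = uncertain point
        margin[queried] = np.inf           # never re-query the same point
        q = int(np.argmin(margin))
        queried.append(q)
        labels.append(oracle(q))           # ask for the ground-truth label
        # Reweight each model by its accuracy on the labels seen so far
        # (a simple softmax rule; lam is an illustrative temperature).
        acc = (preds[:, queried] == np.array(labels)).mean(axis=1)
        weights = np.exp(acc / lam)
        weights /= weights.sum()
    return weights, queried, labels
```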
Open Source Code | No | The paper does not provide concrete access to source code for its methodology.
Open Datasets | Yes | The text classification task is collected from 20 Newsgroups (https://www.cse.ust.hk/TL/). The last task is a spam detection problem, and we use the dataset obtained from the ECML PAKDD Discovery Challenge (http://ecmlpkdd2006.org/challenge.html) to verify whether our method can help improve the performance.
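The 20 Newsgroups portion is straightforward to obtain; a minimal sketch using scikit-learn's built-in loader as a stand-in for the preprocessed copy hosted at https://www.cse.ust.hk/TL/ (the category pair and TF-IDF features are illustrative choices, not necessarily the paper's task construction):

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

# Binary task built from two categories chosen for illustration only.
newsgroups = fetch_20newsgroups(
    subset="all",
    categories=["rec.autos", "sci.space"],
    remove=("headers", "footers", "quotes"),
)
X = TfidfVectorizer().fit_transform(newsgroups.data)
y = newsgroups.target
```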
Dataset Splits | No | For each task, we randomly divide the data into two parts: 75% as the unlabeled pool and the remaining 25% as the test set. (No distinct validation set is mentioned.)
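The reported 75%/25% split is easy to reproduce; a sketch assuming the X and y built above (the random seed is arbitrary, not taken from the paper):

```python
from sklearn.model_selection import train_test_split

# 75% unlabeled pool, 25% test, matching the split quoted above.
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
# In the active setting, y_pool is hidden and revealed only for
# instances the learner queries; no validation set is carved out.
```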
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | No | The paper mentions 'hinge loss and logistic regression' and refers to 'λ > 0' in the algorithm, but does not provide specific numerical values for hyperparameters such as the learning rate, batch size, or other training configurations.
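For concreteness, a regularized hinge objective of the kind alluded to has the standard form min_w (1/n) Σ_i max(0, 1 − y_i w·x_i) + λ‖w‖², with λ > 0. A minimal scikit-learn sketch of both loss choices follows; every numeric value is a placeholder, since the paper reports none:

```python
from sklearn.linear_model import SGDClassifier

# λ and max_iter are illustrative placeholders, not values from the paper.
lam = 0.01

hinge_model = SGDClassifier(loss="hinge", alpha=lam, max_iter=1000)
logistic_model = SGDClassifier(loss="log_loss", alpha=lam, max_iter=1000)

# Either model can then be fit on whatever labeled data is available,
# e.g. the points queried by the active-learning loop sketched earlier:
# hinge_model.fit(X_pool[queried], np.array(labels))
```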