Efficient Test-Time Predictor Learning With Group-Based Budget

Authors: Li Wang, Dajiang Zhu, Yujie Chi

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on various datasets demonstrate the effectiveness and efficiency of the proposed method by comparing with various baselines.
Researcher Affiliation | Academia | Li Wang, Department of Mathematics, University of Texas at Arlington, li.wang@uta.edu; Dajiang Zhu, Department of Computer Science and Engineering, University of Texas at Arlington, dajiang.zhu@uta.edu; Yujie Chi, Physics Department, University of Texas at Arlington, yujie.chi@uta.edu
Pseudocode | Yes | Algorithm 1: Learning with test-time budget (LTB)
Open Source Code | No | The paper links to code for baseline methods (e.g., CSTC, FS) but provides no access to, or explicit statement about, the source code of the proposed method (LTB).
Open Datasets | Yes | Both the slice loc and blog data are freely available from the UCI Machine Learning Repository, and the remaining datasets are from the LIBSVM datasets.
Dataset Splits | No | The paper lists 'Train' and 'Test' sizes for each dataset in Table 1 and mentions tuning the parameter C, but does not give the details needed to reproduce the splits into training, validation, and test sets (e.g., percentages, random seeds, or citations to predefined splits for hyperparameter tuning); a sketch of such a specification appears after the table.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as CPU/GPU models, memory, or cloud instance types.
Software Dependencies | No | The paper mentions using 'solvers in Liblinear (Fan et al. 2008)' but does not give its version number or any other software dependencies with their versions.
Experiment Setup | Yes | In the experiments, we tune the parameter C in the range [0.01, 0.1, 1, 10, 100] and fix the parameter ϵ in SVR as 0.1. (A sketch of this protocol appears after the table.)
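For reference, the split specification flagged as missing above amounts to a few reproducible choices. The sketch below is illustrative only: the 80/10/10 proportions, the fixed seed, and the use of scikit-learn are assumptions, not details from the paper.

```python
# Illustrative sketch of a reproducible train/validation/test split.
# The 80/10/10 proportions and the seed are assumptions for
# demonstration; the paper reports neither.
from sklearn.model_selection import train_test_split

SEED = 0  # fixing the seed makes the split exactly reproducible

def split_dataset(X, y, seed=SEED):
    # Hold out 20% of the data, then divide the holdout evenly into a
    # validation set (for tuning C) and a test set (for final results).
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_hold, y_hold, test_size=0.5, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```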
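The Experiment Setup quote maps directly onto a small grid search. Below is a minimal sketch, assuming scikit-learn's LinearSVR (which wraps the Liblinear solvers the paper cites) and 5-fold cross-validation; the paper states the grid for C and the fixed ϵ, but not how C was selected.

```python
# Sketch of the stated tuning protocol: C searched over
# [0.01, 0.1, 1, 10, 100] with the SVR parameter epsilon fixed at 0.1.
# LinearSVR and 5-fold cross-validation are assumptions; the paper
# does not name the solver wrapper or the selection scheme.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVR

param_grid = {"C": [0.01, 0.1, 1, 10, 100]}  # grid reported in the paper

search = GridSearchCV(
    LinearSVR(epsilon=0.1, max_iter=10000),  # epsilon fixed as reported
    param_grid,
    cv=5,  # assumed; the paper does not describe the selection protocol
)
# Typical use: search.fit(X_train, y_train), then evaluate
# search.best_estimator_ on the held-out test set.
```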