Adaptive Sparse Confidence-Weighted Learning for Online Feature Selection
Authors: Yanbin Liu, Yan Yan, Ling Chen, Yahong Han, Yi Yang
AAAI 2019, pp. 4408-4415
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 6 (Experiments): "In this section, we evaluate the proposed ASCW algorithm on three imbalance measures, i.e., F-measure, AUROC, and AUPRC, and compare with various online learning and feature selection methods." |
| Researcher Affiliation | Academia | (1) SUSTech-UTS Joint Centre of CIS, Southern University of Science and Technology; (2) Centre for Artificial Intelligence, University of Technology Sydney; (3) College of Intelligence and Computing, Tianjin University |
| Pseudocode | Yes | Algorithm 1 (Imbalanced sparse CW in online-batch manner) and Algorithm 2 (Multiple Cost-Sensitive Learning). |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the methodology described. |
| Open Datasets | Yes | "We conduct experiments on three widely-used high-dimensional benchmarks and sample with different ratios to construct nine imbalance configurations, as shown in Table 1." Datasets: real-sim, rcv1, news20. |
| Dataset Splits | No | The paper mentions 'training data' and 'test performance' but does not specify explicit train/validation/test splits with percentages or sample counts. It refers to 'online-batch' processing, which is a different concept from a static dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, memory, or specific cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Liblinear' (for L1SVM) but does not provide specific version numbers for any software dependencies or libraries used in their implementation. |
| Experiment Setup | Yes | To explain the necessity of the online-batch update and explore proper batch size, we perform experiments on news20 with various batch sizes, as shown in Table 2. The best performance is achieved with batch size=1... We thus set batch size=256 in remaining experiments. ... we set the selected feature dimension to 50 for all algorithms except that for CSOAL we set query ratio to be 1%. |
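Since the paper's code is not released, the two rank/threshold metrics named in the evaluation (F-measure and AUROC) can be sketched in pure Python. This is an illustrative reimplementation of the standard metric definitions, not the authors' code; the function names and the 0.5 decision threshold are assumptions.

```python
def f_measure(labels, scores, threshold=0.5):
    """F-measure (F1) for binary labels, thresholding raw scores.

    labels: iterable of 0/1 ground-truth labels
    scores: iterable of classifier scores (higher = more positive)
    """
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def auroc(labels, scores):
    """AUROC as the probability that a random positive example
    scores higher than a random negative one (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, with labels `[1, 1, 0, 0]` and scores `[0.9, 0.4, 0.6, 0.2]`, three of the four positive/negative pairs are correctly ordered, giving AUROC = 0.75, while the 0.5 threshold yields one true positive, one false positive, and one false negative, giving F-measure = 0.5. On imbalanced data such metrics are more informative than raw accuracy, which motivates their use in the paper.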