Fast Factorization-free Kernel Learning for Unlabeled Chunk Data Streams
Authors: Yi Wang, Nan Xue, Xin Fan, Jiebo Luo, Risheng Liu, Bin Chen, Haojie Li, Zhongxuan Luo
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Both theoretical analysis and experimental validation on real-world datasets demonstrate that the proposed methods learn chunk data streams with significantly lower computational costs and comparable or superior accuracy relative to the state of the art. |
| Researcher Affiliation | Academia | (1) DUT-RU International School of Information Science and Engineering, Dalian University of Technology, P. R. China; (2) Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, P. R. China; (3) Department of Computer Science, University of Rochester, USA |
| Pseudocode | Yes | Algorithm 1 FKDA-batch method |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | In this section, we evaluate the performances of the proposed methods on four publicly-available datasets: AR [Kim et al., 2011], AWA [Lampert et al., 2014], Caltech256 [Griffin et al., 2007] and MNIST-Fashion (MNIST-F for short) [Xiao et al., 2017]. |
| Dataset Splits | Yes | Table 4 tabulates the results for FKDA and KNDA for detecting five unlabeled chunks with 20-fold cross-validation. |
| Hardware Specification | Yes | All methods are implemented in MATLAB and run on an Intel(R) Core(TM) i7 PC with a 3.40 GHz CPU and 8 GB RAM. |
| Software Dependencies | No | The paper mentions 'implemented in MATLAB' but does not provide specific version numbers for MATLAB or any other software dependencies. |
| Experiment Setup | Yes | For AWA and Caltech256, we take the outputs of the 7th fully-connected layer of a very deep 19-layer CNN as features (4096 dims). The kernel function exp(−‖x − y‖²/σ) is used for all kernel DAs. Experiments show that choosing σ = d for FKDA (IFKDA) and AKDA/QR produces good overall results, so we use this value for them in all experiments. |
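
The quoted setup fixes the only free parameter: the kernel bandwidth σ is set to the feature dimensionality d (4096 for the CNN features). Below is a minimal sketch of that kernel, assuming the standard squared-Euclidean reading of exp(−‖x − y‖²/σ); the function name `rbf_kernel` and the use of NumPy are illustrative choices, not taken from the paper (which used MATLAB).

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / sigma), as quoted in the setup."""
    # Pairwise squared Euclidean distances between rows of X and rows of Y,
    # via the expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / sigma)

# The paper reports sigma = d for FKDA (IFKDA) and AKDA/QR.
d = 4096                      # dimensionality of the fc7 CNN features
X = np.random.randn(5, d)     # toy stand-ins for feature chunks
Y = np.random.randn(3, d)
K = rbf_kernel(X, Y, sigma=d)
print(K.shape)                # (5, 3)
```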