Online Multi-Kernel Learning with Graph-Structured Feedback

Authors: Pouya M Ghari, Yanning Shen

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical tests on real datasets demonstrate the effectiveness of the novel approach." (Abstract) ... "In this section, our proposed online OMKL-GF framework is tested in terms of accuracy and computational complexity. We evaluate our proposed online OMKL-GF in comparison with the following kernel learning baselines." (Section 5, Experiments)
Researcher Affiliation | Academia | "University of California, Irvine, CA, USA. Correspondence to: Yanning Shen <yannings@uci.edu>."
Pseudocode | Yes | "Algorithm 1 Generating Graph Gt" ... "Algorithm 2 OMKL with Graph Feedback (OMKL-GF)"
Open Source Code | No | The paper does not mention releasing source code and includes no links to a code repository.
Open Datasets | Yes | "The performance of MKL algorithms is tested for online regression over the following real datasets downloaded from the UCI Machine Learning Repository." ... Air Quality (Vito et al., 2008); Istanbul Stock Exchange (Akbilgic et al., 2014); Twitter (Kawala et al., 2013); Tom's Hardware (Kawala et al., 2013); Naval Propulsion Plants (Coraddu et al., 2016).
Dataset Splits | No | The paper describes its online learning approach and how MSE is calculated over time, but it does not specify traditional train/validation/test splits (e.g., percentages or counts) for the datasets. It reports "MSE at time instant t" rather than a formal splitting strategy.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to conduct the experiments.
Software Dependencies | No | The paper does not mention any specific software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow, or specific libraries with versions).
Experiment Setup | Yes | "The number of random features D is fixed to 50 for all random feature-based MKL algorithms. Also, the regularization coefficient λ is set to 10^-3. In addition, for MKL algorithms we consider a dictionary of 17 radial basis function (RBF) kernels with different bandwidths. Let σi,dic denote the variance of the i-th RBF kernel in the dictionary. In this case, σi,dic = 2^(i-9), where 1 <= i <= 17. Moreover, for all MKL algorithms, the stepsize η and the exploration rate ηe are set to 1/sqrt(t)."