Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
# Incremental Randomized Sketching for Online Kernel Learning
Authors: Xiao Zhang, Shizhong Liao
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the incremental randomized sketching achieves a better learning performance in terms of accuracy and efficiency even in adversarial environments. |
| Researcher Affiliation | Academia | 1College of Intelligence and Computing, Tianjin University, Tianjin 300350, China. |
| Pseudocode | Yes | Algorithm 1: SkeGD Algorithm |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it contain an explicit code release statement or repository link. |
| Open Datasets | Yes | We compare our SkeGD with the state-of-the-art online kernel learning algorithms on the well-known classification benchmarks (footnote 4: https://www.csie.ntu.edu.tw/~cjlin/libsvm). |
| Dataset Splits | No | The paper mentions merging 'training and testing data into a single dataset' and performing '20 different random permutations', but it does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology for train/validation/test sets). |
| Hardware Specification | Yes | All experiments are performed on a machine with 4-core Intel Core i7 3.60 GHz CPU and 16GB memory. |
| Software Dependencies | Yes | The compared algorithms are obtained from the LIBOL v0.3.0 toolbox and the LSOKL toolbox (footnote 5). |
| Experiment Setup | Yes | The stepsizes η of all the gradient descent based algorithms are tuned in 10^[−5:+1:0], and the regularization parameters λ are tuned in 10^[−4:+1:1]. Besides, we use the Gaussian kernel κ(x, x′) = exp(−‖x − x′‖²₂ / (2σ²)), where the set σ ∈ {2^[−5:+0.5:7]} is adopted as the candidate kernel set. [...] Besides, we set τ = 0.2, s_p = 3B/4, s_m = τ·s_p, d = 4 and ρ = 0.3T in our SkeGD if not specially specified, and the rank k = 0.1B for NOGD and SkeGD. |