Learning with Incremental Iterative Regularization

Authors: Lorenzo Rosasco, Silvia Villa

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 experiments
Researcher Affiliation | Academia | Lorenzo Rosasco: DIBRIS, Univ. Genova, Italy; LCSL, IIT & MIT, USA. Silvia Villa: LCSL, IIT & MIT, USA
Pseudocode | No | The paper describes the algorithm using mathematical equations (7) and (8) but does not present it as a structured pseudocode or algorithm block. (A hedged reconstruction is sketched in code after this table.)
Open Source Code | No | The paper contains no statement about releasing source code and provides no link to a code repository.
Open Datasets | Yes | Three real-world datasets: cpuSmall, available at http://www.cs.toronto.edu/~delve/data/comp-activ/desc.html, plus Adult and Breast Cancer Wisconsin (Diagnostic) from the UCI repository (2013).
Dataset Splits | No | The paper mentions "Validation Error" and "Training Error" in Figure 2 but does not detail how the data were split into training, validation, and test sets (e.g., percentages or sample counts).
Hardware Specification | No | The paper does not specify any hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | Yes | "Let $\hat{w}_0 \in \mathcal{H}$ and $\gamma \in \mathbb{R}_{++}$. Consider the sequence $(\hat{w}_t)_{t \in \mathbb{N}}$ generated through the following procedure: ..." and "The input points $(x_i)_{1 \le i \le n}$ are uniformly distributed in $[0, 1]$ and the output points are obtained as $y_i = \langle w^*, \Phi(x_i) \rangle + N_i$, where $N_i$ is a Gaussian noise with zero mean and standard deviation 1 and $\Phi = (\varphi_k)_{1 \le k \le d}$ is a dictionary of functions whose $k$-th element is $\varphi_k(x) = \cos((k-1)x) + \sin((k-1)x)$. In Figure 1, we plot the test error for $d = 5$ (with $n = 80$ in (a) and $800$ in (b))."
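
Since the paper states the method only through equations (7) and (8), quoted above only in part, the following is a minimal sketch of one plausible reading: each epoch makes a cyclic pass over the n training points, taking a single gradient step on the squared loss of each point, and the number of epochs plays the role of the regularization parameter (early stopping). The names `iir`, `iir_epoch`, and `gamma` are ours, not the authors', and this is a reconstruction under that assumption, not the authors' implementation.

```python
import numpy as np

def iir_epoch(w, X, y, gamma):
    """One incremental pass: a single squared-loss gradient step
    on each training point, visited cyclically."""
    for i in range(X.shape[0]):
        residual = X[i] @ w - y[i]       # pointwise prediction error
        w = w - gamma * residual * X[i]  # one-point gradient step
    return w

def iir(X, y, gamma, n_epochs, w0=None):
    """Run n_epochs incremental passes from w0 and return every
    epoch's iterate, so the stopping epoch can be chosen afterwards."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    path = []
    for _ in range(n_epochs):
        w = iir_epoch(w, X, y, gamma)
        path.append(w.copy())
    return path
```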
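
And a sketch of the quoted synthetic setup for Figure 1(a), reusing `iir` from above. The ground-truth vector `w_star`, the random seed, the step size `gamma`, and the number of epochs are all assumptions (the quote does not fix them), and measuring the test error against noiseless targets is one reasonable reading, not necessarily the paper's.

```python
rng = np.random.default_rng(0)              # arbitrary seed (assumption)

def dictionary(x, d):
    """Features phi_k(x) = cos((k-1)x) + sin((k-1)x), k = 1..d."""
    k = np.arange(d)                        # k-1 = 0, ..., d-1
    return np.cos(np.outer(x, k)) + np.sin(np.outer(x, k))

d, n = 5, 80                                # Figure 1(a) setting from the quote
gamma, n_epochs = 0.1, 200                  # placeholders, not from the paper
w_star = rng.standard_normal(d)             # assumed ground-truth coefficients

x = rng.uniform(0.0, 1.0, size=n)           # inputs uniform in [0, 1]
X = dictionary(x, d)
y = X @ w_star + rng.standard_normal(n)     # Gaussian noise, std 1

x_test = rng.uniform(0.0, 1.0, size=1000)   # fresh points for the test error
X_test = dictionary(x_test, d)
y_test = X_test @ w_star                    # noiseless targets for evaluation

test_err = [np.mean((X_test @ w - y_test) ** 2)
            for w in iir(X, y, gamma, n_epochs)]
best_epoch = int(np.argmin(test_err)) + 1   # early-stopping epoch
print(f"best epoch: {best_epoch}, test error: {test_err[best_epoch - 1]:.4f}")
```

Plotting `test_err` against the epoch index gives a curve of the kind shown in the paper's Figure 1, with the minimum marking the early-stopping point.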