Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Reverse Iterative Volume Sampling for Linear Regression
Authors: Michał Dereziński, Manfred K. Warmuth
JMLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we provide experimental evidence which confirms our theoretical findings. |
| Researcher Affiliation | Academia | Department of Computer Science University of California Santa Cruz |
| Pseudocode | Yes | Algorithm 2 RegVol(X, s, λ) and Algorithm 3 FastRegVol(X, s, λ) |
| Open Source Code | No | The paper does not provide explicit statements or links indicating that the source code for the methodology described is publicly available. |
| Open Datasets | Yes | The experiments were performed on several benchmark linear regression datasets from the libsvm repository (Chang and Lin, 2011). |
| Dataset Splits | No | The paper states, 'evaluating the subsampled ridge estimator w^λ(S) using the average loss over the full dataset, i.e., Average Loss: (1/n)‖Xw^λ(S) − y‖²', which indicates evaluation on the full dataset rather than a traditional train/test/validation split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software libraries or dependencies used in the experiments. |
| Experiment Setup | No | The paper states that 'Figure 7 shows the results only with one value of λ for each dataset, chosen so that the subsampled ridge estimator performed best (on average over all samples of preselected size s)', but it does not specify the chosen lambda values or other hyperparameters such as learning rate, batch size, or optimizer settings. |
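The pseudocode row above refers to the paper's regularized volume sampling routine (Algorithm 2, RegVol), which follows the reverse iterative scheme of the title: start from all n rows and repeatedly delete one row until s remain, where each deletion probability is proportional to the complement of that row's regularized leverage score. The sketch below is an illustrative, naive O(n·d²)-per-step rendering of that idea, not the paper's optimized implementation (the function name `reg_vol_sample` and its interface are our own):

```python
import numpy as np

def reg_vol_sample(X, s, lam, seed=None):
    """Hedged sketch of reverse iterative regularized volume sampling.

    Starting from S = {1..n}, repeatedly remove a row i in S with
    probability proportional to 1 - x_i^T (X_S^T X_S + lam*I)^{-1} x_i
    until |S| = s.  Recomputes the inverse at every step for clarity;
    the paper's FastRegVol avoids this with rank-one updates.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    S = list(range(n))
    while len(S) > s:
        Xs = X[S]
        Z = np.linalg.inv(Xs.T @ Xs + lam * np.eye(d))
        # Complement of each remaining row's regularized leverage score.
        p = np.array([1.0 - X[i] @ Z @ X[i] for i in S])
        p = np.clip(p, 0.0, None)
        p /= p.sum()
        S.pop(rng.choice(len(S), p=p))
    return S
```

A subset returned this way can then be used to fit the subsampled ridge estimator w^λ(S) on the selected rows only.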