A Consistent Regularization Approach for Structured Prediction
Authors: Carlo Ciliberto, Lorenzo Rosasco, Alessandro Rudi
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results are provided to demonstrate the practical usefulness of the proposed approach. |
| Researcher Affiliation | Academia | 1 Laboratory for Computational and Statistical Learning Istituto Italiano di Tecnologia, Genova, Italy & Massachusetts Institute of Technology, Cambridge, MA 02139, USA. 2 Università degli Studi di Genova, Genova, Italy. |
| Pseudocode | No | The paper refers to 'Alg. 1' and provides the mathematical formulation for it, but it does not present it in a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statements about the release of source code for the methodology described, nor does it include links to a code repository. |
| Open Datasets | Yes | We considered the problem of ranking movies in the MovieLens dataset [29] (ratings from 1 to 5 of 1682 movies by 943 users). We considered the USPS digits reconstruction experiment originally proposed in [18]. |
| Dataset Splits | Yes | We randomly sampled n = 643 users for training and tested on the remaining 300. We performed 5-fold cross-validation for model selection. (A sketch of this split and selection protocol follows the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory specifications, or cloud computing instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions using the Matlab FMINUNC function, but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | No | The paper describes general experimental approaches such as kernel choices and cross-validation, but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings. |
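
As a reading aid, below is a minimal Python sketch of the sampling and model-selection protocol quoted in the Dataset Splits row (643 MovieLens users sampled for training, 300 held out for testing, 5-fold cross-validation for model selection). The regularization grid and the scoring function are hypothetical placeholders; the paper's actual estimator and ranking loss are not reproduced here.

```python
# Hedged sketch of the split/model-selection protocol described in the paper.
# Only the user counts (943 total, 643 train, 300 test) and the 5-fold CV
# come from the paper; the lambda grid and scoring are placeholders.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

n_users = 943                                  # users in the MovieLens dataset
user_ids = rng.permutation(n_users)
train_users, test_users = user_ids[:643], user_ids[643:]   # 643 / 300 split

candidate_lambdas = [1e-3, 1e-2, 1e-1, 1.0]    # hypothetical regularization grid
kf = KFold(n_splits=5, shuffle=True, random_state=0)

def fit_and_score(train_ids, val_ids, lam):
    """Placeholder for fitting the structured predictor with regularization
    parameter `lam` on `train_ids` and scoring it on `val_ids`."""
    return -lam  # dummy value; the real score would be a ranking performance

cv_scores = {}
for lam in candidate_lambdas:
    folds = [fit_and_score(train_users[tr], train_users[va], lam)
             for tr, va in kf.split(train_users)]
    cv_scores[lam] = np.mean(folds)

best_lambda = max(cv_scores, key=cv_scores.get)
print(f"selected lambda = {best_lambda}")
```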