Consistent Multitask Learning with Nonlinear Output Relations

Authors: Carlo Ciliberto, Alessandro Rudi, Lorenzo Rosasco, Massimiliano Pontil

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Encouraging experimental results show the benefits of the proposed method in practice.
Researcher Affiliation | Academia | (1) Department of Computer Science, University College London, London, UK; (2) INRIA Sierra Project-team and École Normale Supérieure, Paris, France; (3) Massachusetts Institute of Technology, Cambridge, USA; (4) Università degli studi di Genova, Genova, Italy; (5) Istituto Italiano di Tecnologia, Genova, Italy.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide access to source code for the described method.
Open Datasets | Yes | Sarcos: "We report experiments on the Sarcos dataset [22]." Movielens100k: "We performed experiments on Movielens100k [40] (movies = documents, users = queries)."
Dataset Splits | Yes | "We used a Gaussian kernel on the input and chose the corresponding bandwidth and the regularization parameter λ by hold-out cross-validation on 30% of the training set (see details in the appendix)." "We used the 10 dataset splits available online for the dataset in [13], each containing 2000 examples per task with 15 examples used for training/validation while the rest is used to measure errors in terms of the explained variance." "We used the (linear) input kernel and the train, validation and test splits adopted in [21] to perform 10 independent trials with 5-fold cross-validation for model selection." A sketch of this hold-out selection loop is given after the table.
Hardware Specification | No | The paper does not report the hardware (e.g., CPU/GPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not name the software dependencies (e.g., libraries or solvers with version numbers) needed to replicate its experiments.
Experiment Setup | No | The paper states that hyperparameters such as the kernel bandwidth and the regularization parameter λ were chosen by cross-validation, but it does not report the selected values or other concrete training configurations.
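
The hold-out protocol quoted under Dataset Splits is concrete enough to sketch in code. Below is a minimal Python reconstruction of that selection loop, assuming plain Gaussian-kernel ridge regression as a stand-in for the paper's estimator; the function names, hyperparameter grids, and random seed are illustrative assumptions rather than the authors' code.

    # Minimal sketch of the quoted protocol: pick the Gaussian-kernel bandwidth
    # and regularization parameter lambda on a 30% hold-out of the training set,
    # scoring candidates by explained variance. Illustrative reconstruction only.
    import numpy as np

    def gaussian_kernel(A, B, bandwidth):
        """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

    def fit_krr(K_train, Y_train, lam):
        """Kernel ridge regression: solve (K + n * lam * I) alpha = Y."""
        n = K_train.shape[0]
        return np.linalg.solve(K_train + n * lam * np.eye(n), Y_train)

    def explained_variance(Y_true, Y_pred):
        """Explained variance (pooled over outputs), the error measure reported
        for the Sarcos tasks."""
        return 1.0 - np.var(Y_true - Y_pred) / np.var(Y_true)

    def holdout_select(X, Y, bandwidths, lambdas, holdout_frac=0.3, seed=0):
        """Select (bandwidth, lambda) on a hold-out split of the training set."""
        rng = np.random.default_rng(seed)  # hypothetical seed; not from the paper
        n = X.shape[0]
        perm = rng.permutation(n)
        n_val = int(holdout_frac * n)
        val_idx, tr_idx = perm[:n_val], perm[n_val:]
        X_tr, Y_tr = X[tr_idx], Y[tr_idx]
        X_val, Y_val = X[val_idx], Y[val_idx]

        best = (None, None, -np.inf)
        for bw in bandwidths:
            K_tr = gaussian_kernel(X_tr, X_tr, bw)
            K_val = gaussian_kernel(X_val, X_tr, bw)
            for lam in lambdas:
                alpha = fit_krr(K_tr, Y_tr, lam)
                score = explained_variance(Y_val, K_val @ alpha)
                if score > best[2]:
                    best = (bw, lam, score)
        return best  # (bandwidth, lambda, validation explained variance)

For instance, holdout_select(X, Y, np.logspace(-1, 1, 5), np.logspace(-9, 0, 10)) would mimic the 30% hold-out selection; these grids are hypothetical, since the paper does not report them, which is precisely the gap flagged under Experiment Setup.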