Efficient Output Kernel Learning for Multiple Tasks

Authors: Pratik Kumar Jawanpuria, Maksim Lapin, Matthias Hein, Bernt Schiele

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evidence from the paper: "Experiments on several multi-task and multi-class data sets illustrate the efficacy of our approach in terms of computational efficiency as well as generalization performance." and "In this section, we present our results on benchmark data sets comparing our algorithm with existing approaches in terms of generalization accuracy as well as computational efficiency."
Researcher Affiliation | Academia | Saarland University, Saarbrücken, Germany; Max Planck Institute for Informatics, Saarbrücken, Germany
Pseudocode | Yes | The paper presents its method as Algorithm 1 (Fast MTL-SDCA); for orientation, see the generic SDCA sketch after this table.
Open Source Code | No | The paper neither links to source code for the described methodology nor states a public release in any repository; it refers only to the supplementary material for algorithm details.
Open Datasets | Yes | The multi-task experiments use: a) Sarcos, a regression data set...; b) Parkinson, a regression data set...; c) Yale, a face recognition...; d) Landmine, a data set...; e) MHC-I, a bioinformatics data set...; f) Letter, a handwritten letters data set. The USPS & MNIST experiments follow the protocol detailed in [10], and results are also reported on the MIT Indoor67 benchmark [26] and on SUN397 [28], a challenging scene classification benchmark.
Dataset Splits | Yes | Table 2 reports mean generalization performance and the standard deviation over ten train-test splits; Table 3 reports mean accuracy and the standard deviation over five train-test splits; for MIT Indoor67 the authors' provided train/test split (80/20 images per class) is used. A sketch of such a per-class split protocol follows the table.
Hardware Specification | No | The paper does not report hardware details such as CPU or GPU models, processor types, or memory used to run the experiments.
Software Dependencies | No | The paper names the algorithms and techniques used (e.g., stochastic dual coordinate ascent (SDCA), hinge and ϵ-SVR loss functions, CNN features), but it does not list any software packages with version numbers (e.g., Python 3.x, PyTorch 1.x, scikit-learn 0.x).
Experiment Setup | No | The paper states that hinge and ϵ-SVR loss functions were used and that hyper-parameter values were cross-validated, but the main text does not report the chosen values (e.g., the regularization parameters of its own model) or other detailed training configurations.
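
The Pseudocode row above refers to Algorithm 1 (Fast MTL-SDCA), whose details the paper defers to the supplementary material. For orientation only, here is a minimal sketch of plain stochastic dual coordinate ascent for a single-task, binary hinge-loss SVM, the standard SDCA loop that the paper's multi-task method builds on. This is not the paper's algorithm: the function name sdca_hinge, the parameter lam, and the epoch schedule are illustrative assumptions.

```python
import numpy as np

def sdca_hinge(X, y, lam=0.1, epochs=20, seed=0):
    """Generic SDCA for min_w (1/n) * sum_i max(0, 1 - y_i <w, x_i>) + (lam/2)||w||^2.

    Ascends the dual one coordinate alpha_i in [0, 1] at a time, keeping the
    primal vector w = (1 / (lam * n)) * sum_i alpha_i * y_i * x_i in sync.
    (A sketch of standard SDCA, not the paper's Fast MTL-SDCA.)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)           # dual variables, one per training example
    w = np.zeros(d)               # primal weights implied by alpha
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in rng.permutation(n):
            if sq_norms[i] == 0.0:
                continue
            # Closed-form maximizer of the dual in alpha_i, projected to [0, 1].
            margin_gap = 1.0 - y[i] * (w @ X[i])
            new_alpha = np.clip(alpha[i] + margin_gap * lam * n / sq_norms[i], 0.0, 1.0)
            # Keep w consistent with the updated dual coordinate.
            w += (new_alpha - alpha[i]) * y[i] * X[i] / (lam * n)
            alpha[i] = new_alpha
    return w, alpha

if __name__ == "__main__":
    # Toy check on two Gaussian blobs.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
    y = np.hstack([np.ones(50), -np.ones(50)])
    w, _ = sdca_hinge(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Each coordinate update has a closed form, which is what keeps the per-step cost of SDCA-style solvers low.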
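
Similarly, the Dataset Splits row quotes means over ten (or five) train-test splits and an 80/20 images-per-class split for MIT Indoor67. The sketch below shows one common way to generate such repeated per-class splits so that mean accuracy and its standard deviation can be computed; it is an assumption about the protocol, not the authors' released split files, and per_class_split and its parameters are hypothetical names.

```python
import numpy as np

def per_class_split(labels, n_train=80, n_test=20, n_repeats=10, seed=0):
    """Yield (train_idx, test_idx) pairs with a fixed number of examples per
    class, repeated n_repeats times; accuracy is then averaged across splits.
    (Illustrative reconstruction of an 80/20 per-class protocol.)"""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    for _ in range(n_repeats):
        train_idx, test_idx = [], []
        for c in np.unique(labels):
            idx = rng.permutation(np.where(labels == c)[0])
            train_idx.extend(idx[:n_train])
            test_idx.extend(idx[n_train:n_train + n_test])
        yield np.asarray(train_idx), np.asarray(test_idx)

# e.g. report mean and standard deviation over the ten splits:
# accs = [evaluate(tr, te) for tr, te in per_class_split(labels)]
# (evaluate is a placeholder for training and scoring a model on one split)
```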