A Structured Prediction Approach for Label Ranking
Authors: Anna Korba, Alexandre Garcia, Florence d'Alché-Buc
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we provide empirical results on synthetic and real-world datasets showing the relevance of our method. |
| Researcher Affiliation | Academia | Anna Korba, Alexandre Garcia, Florence d'Alché-Buc, LTCI, Télécom ParisTech, Université Paris-Saclay, Paris, France, firstname.lastname@telecom-paristech.fr |
| Pseudocode | No | The paper describes algorithmic steps in prose and mathematical formulations but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code to reproduce our results is available: https://github.com/akorba/Structured_Approach_Label_Ranking/ |
| Open Datasets | Yes | Finally we evaluate the performance of our approach on standard benchmarks. We present the results obtained with two regressors: Kernel Ridge regression (Ridge) and k-Nearest Neighbors (kNN). Table 2: Mean Kendall's τ coefficient on benchmark datasets (authorship, glass, iris, vehicle, vowel, wine) |
| Dataset Splits | Yes | We adopt the same setting as Cheng et al. (2010) and report the results of our predictors in terms of mean Kendall's τ: kτ = (C − D) / (K(K − 1)/2), where C is the number of concordant pairs between two rankings and D is the number of discordant pairs (21), from five repetitions of a ten-fold cross-validation (c.v.). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like Kernel Ridge regression, k-Nearest Neighbors, Hungarian algorithm, and ILP solvers, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | No | The paper states that 'The parameters of our regressors were tuned in a five folds inner c.v. for each training set. We report our parameter grids in the supplementary materials.' This indicates that the specific setup details are not present in the main text. |
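The evaluation metric quoted above, Kendall's τ, counts concordant and discordant item pairs between two rankings and normalizes by the total number of pairs K(K − 1)/2. A minimal sketch of that formula (not the authors' code; the function name and rank-vector representation are illustrative):

```python
def kendall_tau(sigma, tau):
    """Kendall's tau between two rankings of K items:
    (C - D) / (K*(K-1)/2), where C and D count concordant and
    discordant item pairs. sigma and tau give each item's rank."""
    K = len(sigma)
    concordant = discordant = 0
    for i in range(K):
        for j in range(i + 1, K):
            # The pair (i, j) is concordant if both rankings order it
            # the same way, discordant if they disagree.
            s = (sigma[i] - sigma[j]) * (tau[i] - tau[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (K * (K - 1) / 2)
```

Identical rankings give τ = 1 (all pairs concordant) and fully reversed rankings give τ = −1, matching the range reported in the paper's Table 2.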