Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Worst Case Analysis of Calibrated Label Ranking Multi-label Classification Method
Authors: Lucas Henrique Sousa Mello, Flávio Miguel Varejão, Alexandre Loureiros Rodrigues
JMLR 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, mathematical proofs are given for the multi-label method ranking by pairwise comparison (RPC) and its extension for classification, named calibrated label ranking (CLR), showing their performance in a worst-case scenario for five multi-label metrics. The objective of this section is to present interesting theoretical properties of CLR that show scenarios where CLR should not be used. |
| Researcher Affiliation | Academia | Lucas H. S. Mello, Department of Informatics, Federal University of Espírito Santo, Vitória, Brazil; Flávio M. Varejão, Department of Informatics, Federal University of Espírito Santo, Vitória, Brazil; Alexandre L. Rodrigues, Department of Statistics, Federal University of Espírito Santo, Vitória, Brazil |
| Pseudocode | Yes | Algorithm 1: Algorithm for training RPC. Data: training data set of m samples D = {(x₁, y₁), …, (x_m, y_m)}. Result: trained binary classifiers c_ij for 1 ≤ i ≤ n, 1 ≤ j ≤ n and i ≠ j. And Algorithm 2: Scoring a single label i in RPC. Input: trained binary classifiers c_ij for all j ≠ i. Result: score s ∈ ℕ |
| Open Source Code | No | The paper primarily presents mathematical proofs and theoretical analysis of existing methods (RPC and CLR) rather than introducing a new methodology with associated code. No statement about code release or repository link is provided for the work described. |
| Open Datasets | No | Most multi-label classification methods are evaluated on real datasets, which is a good practice for comparing the performance among methods in the average scenario. Due to the large number of factors to consider, this empirical approach does not explain, nor does it show, the factors impacting performance. The paper itself does not use specific datasets; it works with 'arbitrary distributions P' or 'special distributions'. |
| Dataset Splits | No | The paper is theoretical, providing mathematical proofs and analysis of multi-label classification methods under worst-case scenarios, and does not conduct experiments on specific datasets. Therefore, no dataset split information is applicable or provided. |
| Hardware Specification | No | The paper focuses on theoretical analysis and mathematical proofs, and does not report on experimental results that would require specific hardware. Therefore, no hardware specifications are provided. |
| Software Dependencies | No | The paper presents a theoretical analysis and mathematical proofs. It does not describe any software implementation details or specific software dependencies with version numbers for its own contributions. |
| Experiment Setup | No | The paper is entirely theoretical, focusing on mathematical proofs and worst-case analysis. It does not describe any experimental setup, hyperparameter values, or training configurations. |
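The pseudocode the paper provides (Algorithms 1 and 2) describes RPC: train one binary classifier per label pair on the samples where exactly one of the two labels is relevant, then score a label by counting its pairwise wins; CLR extends this with a virtual calibration label that splits relevant from irrelevant labels. A minimal Python sketch of that scheme is below. It is not the authors' implementation: `fit_binary` is a hypothetical stand-in for any binary base learner (here a trivial majority-class classifier), and the function names are illustrative.

```python
from itertools import combinations

def fit_binary(X, y):
    """Placeholder base learner: returns a classifier c(x) -> 0/1
    that always predicts the majority class of y."""
    majority = 1 if sum(y) * 2 >= len(y) else 0
    return lambda x: majority

def train_rpc(X, Y, n_labels):
    """Algorithm 1 (sketch): one classifier c_ij per pair i < j.
    c_ij(x) = 1 means label i is preferred over label j."""
    classifiers = {}
    for i, j in combinations(range(n_labels), 2):
        # Keep only samples where exactly one of labels i, j is relevant.
        Xp, yp = [], []
        for x, y in zip(X, Y):
            if y[i] != y[j]:
                Xp.append(x)
                yp.append(1 if y[i] else 0)
        classifiers[(i, j)] = fit_binary(Xp, yp)
    return classifiers

def score_label(classifiers, x, i, n_labels):
    """Algorithm 2 (sketch): score s = number of pairwise wins of label i."""
    s = 0
    for j in range(n_labels):
        if j == i:
            continue
        a, b = min(i, j), max(i, j)
        pred = classifiers[(a, b)](x)
        # Translate the pairwise decision into a vote for label i.
        if (pred == 1) == (i == a):
            s += 1
    return s

def predict_clr(classifiers, cal, x, n_labels):
    """CLR extension (sketch): cal[i](x) = 1 means label i beats the
    virtual calibration label. A label is predicted relevant iff its
    total score exceeds the virtual label's score."""
    virtual_score = sum(1 for i in range(n_labels) if cal[i](x) == 0)
    return [i for i in range(n_labels)
            if score_label(classifiers, x, i, n_labels) + cal[i](x) > virtual_score]
```

For example, on toy data where label 0 is always relevant and label 1 never is, `train_rpc` learns that label 0 beats label 1, so `score_label` ranks label 0 first, and with per-label calibration classifiers `predict_clr` returns only label 0 as relevant.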