Reliable Multilabel Classification: Prediction with Partial Abstention
Authors: Vu-Linh Nguyen, Eyke Hüllermeier | Pages 5264–5271
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present an empirical analysis that is meant to show the effectiveness of our approach to prediction with abstention. To this end, we perform experiments on a set of standard benchmark data sets from the MULAN repository (cf. Table 1), following a 10-fold cross-validation procedure. |
| Researcher Affiliation | Academia | Vu-Linh Nguyen, Eyke Hüllermeier, Heinz Nixdorf Institute and Department of Computer Science, Paderborn University, Germany. vu.linh.nguyen@uni-paderborn.de, eyke@upb.de |
| Pseudocode | No | The main paper does not contain any structured pseudocode or algorithm blocks. It mentions that 'a concrete algorithm is given in the supplementary material' for some cases, but this is not present in the provided text. |
| Open Source Code | No | No explicit statement or link to the open-source code for the methodology proposed in this paper was found. The link provided (`http://scikit.ml/api/skmultilearn.html`) is for an implementation of a base learner (scikit-multilearn) used by the authors, not their own novel contribution. |
| Open Datasets | Yes | standard benchmark data sets from the MULAN repository (cf. Table 1) |
| Dataset Splits | Yes | following a 10-fold cross-validation procedure. |
| Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, memory) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions using 'logistic regression (LR) as base learner (in its default setting in sklearn, i.e., with regularisation parameter set to 1)' and refers to 'scikit.ml/api/skmultilearn.html' but does not specify exact version numbers for these software dependencies. |
| Experiment Setup | Yes | For training an MLC classifier, we use binary relevance (BR) learning with logistic regression (LR) as base learner (in its default setting in sklearn, i.e., with regularisation parameter set to 1). ... We conduct a first series of experiments (SEP) with linear penalty f1(a) = a · c, where c ∈ [0.05, 0.5], and a second series (PAR) with concave penalty f2(a) = (a · m · c)/(m + a), varying c ∈ [0.1, 1]. |
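The training setup quoted above (binary relevance with logistic regression under sklearn defaults, evaluated by 10-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic data from `make_multilabel_classification` stands in for the MULAN benchmark sets, and `MultiOutputClassifier` is used as a stand-in for a binary-relevance wrapper such as scikit-multilearn's `BinaryRelevance`.

```python
# Sketch of the reported setup: binary relevance (BR) multilabel learning
# with logistic regression (C=1, sklearn's default regularisation) as base
# learner, evaluated with 10-fold cross-validation.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.multioutput import MultiOutputClassifier

# Synthetic stand-in for a MULAN benchmark data set.
X, Y = make_multilabel_classification(
    n_samples=200, n_features=20, n_classes=5, random_state=0
)

# Binary relevance: one independent binary classifier per label.
br = MultiOutputClassifier(LogisticRegression(C=1.0, max_iter=1000))

hamming = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(X):
    br.fit(X[train_idx], Y[train_idx])
    pred = br.predict(X[test_idx])
    # Hamming loss: fraction of label assignments that are wrong.
    hamming.append(np.mean(pred != Y[test_idx]))

print(f"mean Hamming loss over 10 folds: {np.mean(hamming):.3f}")
```

The per-label probabilities produced by such a BR model are what the paper's partial-abstention rule operates on: labels whose predicted probability is close to 0.5 are the candidates for abstention, traded off against the penalty f1 or f2 on the number of abstained labels.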