Consistent Multilabel Classification

Authors: Oluwasanmi O. Koyejo, Nagarajan Natarajan, Pradeep K. Ravikumar, Inderjit S. Dhillon

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type Experimental Empirical results on synthetic and benchmark datasets are supportive of our theoretical findings.
Researcher Affiliation Academia Oluwasanmi Koyejo, Department of Psychology, Stanford University (sanmi@stanford.edu); Nagarajan Natarajan, Department of Computer Science, University of Texas at Austin (naga86@cs.utexas.edu); Pradeep Ravikumar, Department of Computer Science, University of Texas at Austin (pradeepr@cs.utexas.edu); Inderjit S. Dhillon, Department of Computer Science, University of Texas at Austin (inderjit@cs.utexas.edu)
Pseudocode Yes Algorithm 1: Plugin-Estimator for the micro and instance metrics
Open Source Code No The paper does not contain an explicit statement or a link to the authors' own open-source code for the described methodology.
Open Datasets Yes We use four benchmark multilabel datasets in our experiments: (i) SCENE, an image dataset [...] (ii) BIRDS [...] (iii) EMOTIONS [...] and (iv) CAL500 [...]. The datasets were obtained from http://mulan.sourceforge.net/datasets-mlc.html.
Dataset Splits Yes Then, the given micro metric is maximized on a validation sample. [...] Algorithm 1: Plugin-Estimator for the micro and instance metrics [...] 2. Split the training data Sm into two sets Sm1 and Sm2. [...] Obtain δ̂ by solving (12) on S2 = ∪_{m=1}^{M} Sm2.
Hardware Specification No The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud computing specifications.
Software Dependencies No The paper mentions performing 'logistic regression (with L2 regularization)' but does not specify any software names with version numbers (e.g., Python, PyTorch, scikit-learn versions).
Experiment Setup No The paper mentions using 'logistic regression (with L2 regularization)' and tuning a threshold on a validation set. However, it does not provide specific hyperparameter values for the regularization, learning rate, batch size, or other detailed training configurations.
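The plug-in approach summarized in the table (Algorithm 1: estimate per-label class-probability functions, e.g. with L2-regularized logistic regression, then tune a single shared threshold δ̂ on a held-out split to maximize the micro metric) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' code: the gradient-descent trainer, the hyperparameter values, and the synthetic data are all assumptions made for the example.

```python
import numpy as np

def fit_logistic(X, y, l2=1.0, lr=0.1, iters=500):
    """L2-regularized logistic regression for one label, via gradient descent."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n + l2 * w / n
        w -= lr * grad
    return w

def micro_f1(Y_true, Y_pred):
    """Micro-averaged F1: pool TP/FP/FN over all instances and labels."""
    tp = np.sum(Y_true * Y_pred)
    fp = np.sum((1 - Y_true) * Y_pred)
    fn = np.sum(Y_true * (1 - Y_pred))
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

def plugin_estimator(X_tr, Y_tr, X_val, Y_val):
    """Plug-in: estimate eta per label on one split, tune one threshold on the other."""
    W = np.column_stack([fit_logistic(X_tr, Y_tr[:, j]) for j in range(Y_tr.shape[1])])
    P_val = 1.0 / (1.0 + np.exp(-X_val @ W))
    candidates = np.unique(P_val)  # every attained probability is a candidate threshold
    delta = max(candidates,
                key=lambda d: micro_f1(Y_val, (P_val >= d).astype(int)))
    return W, delta

# Tiny synthetic demo: labels generated from a random linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
W_true = rng.normal(size=(5, 3))
Y = (X @ W_true > 0).astype(int)

W, delta = plugin_estimator(X[:200], Y[:200], X[200:], Y[200:])
P = 1.0 / (1.0 + np.exp(-X[200:] @ W))
print("tuned threshold:", round(float(delta), 3))
print("validation micro-F1:", round(micro_f1(Y[200:], (P >= delta).astype(int)), 3))
```

The key design point of the plug-in family is the split of estimation from calibration: the probability model is fit on one half of the data, and the single scalar threshold is chosen on the other half to directly maximize the target metric, which is what makes the estimator consistent for threshold-characterized metrics.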