Regret Bounds for Non-decomposable Metrics with Missing Labels

Authors: Nagarajan Natarajan, Prateek Jain

NeurIPS 2016

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We corroborate our theoretical findings with experimental evaluation on several real-world multi-label datasets, demonstrating the efficacy of our proposed framework for handling missing labels. |
| Researcher Affiliation | Academia | Weiwei Liu, Ivor Tsang, University of New South Wales, Australia, University of Technology Sydney, Australia |
| Pseudocode | Yes | Algorithm 1: Stochastic Online Learning for Missing Label Problems |
| Open Source Code | No | The paper does not provide an explicit statement about the release of open-source code or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on several real-world multi-label datasets including Bibtex, Delicious, EUR-Lex, RCV1-v2, Wiki, and LSHTC-Large. All datasets are publicly available from [37, 24, 23]. |
| Dataset Splits | No | For each dataset, we randomly split the data into 80% for training and 20% for testing. There is no explicit mention of a validation set or its specific split details. |
| Hardware Specification | Yes | All experiments are performed on a single machine with 64 Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz cores and 1TB memory. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We choose the learning rate η by tuning hyperparameters using cross-validation on the training set. The regularization parameter λ is set to 1/n. |
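The setup described above (an 80%/20% random train/test split, a stochastic online learning pass, and a regularization parameter λ = 1/n) can be sketched as follows. This is a minimal illustrative sketch on synthetic data, not the paper's Algorithm 1: the squared-loss SGD update, the toy data, and the fixed learning rate are all placeholder assumptions, since the report only records that η was tuned by cross-validation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label-style data: n examples, d features, k binary labels.
# (Hypothetical stand-in for datasets such as Bibtex or Delicious.)
n, d, k = 500, 20, 5
X = rng.normal(size=(n, d))
Y = (X @ rng.normal(size=(d, k)) > 0).astype(float)

# 80% train / 20% test random split, as stated in the report.
perm = rng.permutation(n)
n_train = int(0.8 * n)
train_idx, test_idx = perm[:n_train], perm[n_train:]

lam = 1.0 / n_train  # regularization parameter lambda = 1/n
eta = 0.1            # learning rate (tuned by cross-validation in the paper)
W = np.zeros((d, k))

# One stochastic online pass over the training set: for each example,
# take a gradient step on the L2-regularized squared loss.
for i in train_idx:
    pred = X[i] @ W
    grad = np.outer(X[i], pred - Y[i]) + lam * W
    W -= eta * grad

# Hamming error on the held-out 20% split.
test_err = np.mean(((X[test_idx] @ W) > 0.5).astype(float) != Y[test_idx])
print(f"Hamming error on test split: {test_err:.3f}")
```

The key point the sketch encodes is that λ scales inversely with the number of training examples, so regularization weakens as more data is observed.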