Multi-Label Learning with Stronger Consistency Guarantees
Authors: Anqi Mao, Mehryar Mohri, Yutao Zhong
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper states: 'While empirical validation is left for future work, our theoretical results demonstrate the potential of these new surrogate losses to advance multi-label learning.' |
| Researcher Affiliation | Collaboration | Anqi Mao (Courant Institute, New York, NY 10012; aqmao@cims.nyu.edu), Mehryar Mohri (Google Research & CIMS, New York, NY 10011; mohri@google.com), Yutao Zhong (Courant Institute, New York, NY 10012; yutao@cims.nyu.edu) |
| Pseudocode | No | The paper describes procedures for gradient computation in prose but does not present them in a pseudocode block or a formally labeled algorithm section. |
| Open Source Code | No | The paper states: 'While empirical validation is left for future work...', and the NeurIPS checklist notes: 'The paper does not include experiments requiring code.' |
| Open Datasets | No | The paper states: 'While empirical validation is left for future work...' and does not describe the use of any dataset for training. |
| Dataset Splits | No | The paper does not include experiments, thus no dataset splits for validation are specified. |
| Hardware Specification | No | The paper does not include experiments, and therefore no hardware specifications are mentioned. |
| Software Dependencies | No | The paper does not include experiments, and therefore no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper does not include experiments, and therefore no experimental setup details such as hyperparameters or training settings are provided. |