Multilabel reductions: what is my loss optimising?

Authors: Aditya K. Menon, Ankit Singh Rawat, Sashank Reddi, Sanjiv Kumar

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "We empirically validate our results, demonstrating scenarios where normalised reductions yield recall gains over unnormalised counterparts." and, from Section 6 (Experimental validation), "We now present empirical results validating the preceding theory."
Researcher Affiliation | Industry | "Aditya Krishna Menon, Ankit Singh Rawat, Sashank J. Reddi, and Sanjiv Kumar. Google Research, New York, NY 10011. {adityakmenon, sashank, ankitsrawat, sanjivk}@google.com"
Pseudocode | No | The paper contains no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper makes no explicit statement about open-source code availability and provides no link to a code repository for the described methodology.
Open Datasets | No | The authors state they "construct a synthetic dataset" and "generate a training sample of 10^4 (instance, label) pairs", but provide no concrete access information (link, DOI, citation) for a publicly available version of this dataset or for the code to generate it.
Dataset Splits | No | The paper states, "we generate a training sample of 10^4 (instance, label) pairs" and "compute their precision and recall on a test sample of 10^3 (instance, label) pairs", but mentions no validation set and gives no explicit percentages for all splits.
Hardware Specification | No | The paper provides no specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using a "linear model" and the "softmax cross-entropy loss" but names no software libraries with version numbers or other ancillary software details needed to replicate the experiment.
Experiment Setup | No | The paper states "We use a linear model for our scorer f, and the softmax cross-entropy loss for ℓMC", but provides no further setup details such as hyperparameter values, training configurations, or system-level settings.
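The table above notes that the paper specifies only a linear scorer trained with softmax cross-entropy on 10^4 training and 10^3 test (instance, label) pairs. A minimal sketch of what such a setup could look like is given below; the data-generating process, the dimensions `d` and `L`, the weight matrix `W_true`, and the training schedule are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample sizes follow the paper; feature dimension d and label count L
# are assumptions, since the paper does not report them.
n_train, n_test, d, L = 10_000, 1_000, 10, 5

# Hypothetical synthetic data: one relevant label per instance,
# drawn from a softmax over a random ground-truth linear scorer.
W_true = rng.normal(size=(d, L))

def sample(n):
    X = rng.normal(size=(n, d))
    logits = X @ W_true
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    y = np.array([rng.choice(L, p=pi) for pi in p])
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# Linear scorer f(x) = W^T x, trained with softmax cross-entropy
# via full-batch gradient descent (an assumed optimiser).
W = np.zeros((d, L))
for _ in range(200):
    logits = X_tr @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(n_train), y_tr] -= 1.0  # gradient of cross-entropy wrt logits
    W -= 0.1 * (X_tr.T @ p) / n_train

# Precision@1 on the test pairs: fraction of instances whose
# top-scored label matches the sampled relevant label.
acc = (np.argmax(X_te @ W, axis=1) == y_te).mean()
print(f"test precision@1: {acc:.3f}")
```

The sketch only illustrates the two details the paper does state (linear model, softmax cross-entropy loss, and the train/test sample sizes); everything else would need to be fixed before the experiment could be reproduced faithfully.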