Optimal Classification with Multivariate Losses

Authors: Nagarajan Natarajan, Oluwasanmi Koyejo, Pradeep Ravikumar, Inderjit Dhillon

ICML 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide empirical results on benchmark datasets, comparing the proposed algorithm to state-of-the-art methods for optimizing multivariate losses. We present two sets of experiments. The first is an experimental validation on synthetic data with known ground truth probabilities. The results serve to verify our main result (Theorem 1) for some of the losses in Table 1. The second set is an experimental evaluation of the proposed algorithm for computing optimal prediction on benchmark datasets, with comparisons to baseline and state-of-the-art algorithms for classification with general losses. (Two representative Table 1 losses are sketched in code after this table.)
Researcher Affiliation | Collaboration | Nagarajan Natarajan (T-NANATA@MICROSOFT.COM), Microsoft Research, India; Oluwasanmi Koyejo (SANMI@ILLINOIS.EDU), Stanford University, CA & University of Illinois at Urbana-Champaign, IL, USA; Pradeep Ravikumar (PRADEEPR@CS.UTEXAS.EDU) and Inderjit S. Dhillon (INDERJIT@CS.UTEXAS.EDU), The University of Texas at Austin, TX, USA
Pseudocode | Yes | Algorithm 1: Computing s for TP Monotonic L. (A hedged sketch of the general recipe follows the table.)
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, or a link to a code repository for the described methodology.
Open Datasets | Yes | We report results on seven benchmark datasets (used in (Koyejo et al., 2014; Ye et al., 2012)): (1) REUTERS, (2) LETTERS, (3) SCENE (a UCI benchmark dataset), (4) WEBPAGE (binary), (5) IMAGE, (6) BREAST CANCER, and (7) SPAMBASE. See (Koyejo et al., 2014; Ye et al., 2012) for more details on the datasets.
Dataset Splits | No | The paper specifies training and test splits (e.g., 16,000 training and 4,000 test instances for LETTERS) but does not define a separate validation split for the overall experimental setup. For one baseline it mentions splitting the training data to tune parameters, but this is not a general validation split.
Hardware Specification | No | The paper does not provide details about the hardware (e.g., CPU or GPU model, memory) used to run the experiments.
Software Dependencies | No | The paper mentions some of the software used (e.g., the "MATLAB wrapper provided by Vedaldi (2011)" and the "svm-struct implementation of (Joachims, 2005)") but gives no version numbers for these or for other software dependencies.
Experiment Setup | No | The paper provides very limited detail on the experimental setup. It mentions using logistic loss with L2 regularization, and discusses structured SVMs and a plugin estimator with threshold selection (sketched in code below), but lacks specific hyperparameter values (e.g., regularization strength) or detailed training configurations.
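
The losses referenced in the "Research Type" row live in the paper's Table 1, which is not reproduced on this page. As orientation only, here is a minimal sketch of two representative TP monotonic performance measures, F1 and Jaccard, written as losses over confusion-matrix rates; the function names are ours, not the paper's:

```python
def confusion_rates(y_true, y_pred):
    """Empirical confusion-matrix rates for binary labels in {0, 1}."""
    n = len(y_true)
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / n
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred)) / n
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred)) / n
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred)) / n
    return tp, fp, fn, tn

def f1_loss(tp, fp, fn, tn):
    # 1 - F1, where F1 = 2*TP / (2*TP + FP + FN); assumes some positives exist
    return 1.0 - 2 * tp / (2 * tp + fp + fn)

def jaccard_loss(tp, fp, fn, tn):
    # 1 - Jaccard, where Jaccard = TP / (TP + FP + FN)
    return 1.0 - tp / (tp + fp + fn)
```

Both decrease as TP grows with the fraction of predicted positives held fixed, which is the flavor of monotonicity the paper's algorithm exploits.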
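
The "Pseudocode" row cites Algorithm 1, "Computing s for TP Monotonic L", but the pseudocode itself is not reproduced here. The following is a minimal sketch of the general recipe such an algorithm implements, under our assumptions (it is not the paper's exact procedure): rank instances by the estimated conditional probability, and for each candidate count s of positive predictions evaluate the expected loss of labeling the top s instances positive. `compute_s` and its arguments are illustrative names.

```python
import numpy as np

def compute_s(eta_hat, loss):
    """Pick how many of the highest-probability instances to label positive.

    eta_hat : estimates of P(y=1 | x) for the n instances (assumed mean > 0)
    loss    : callable loss(tp, fp, fn, tn) on confusion-matrix rates,
              e.g. f1_loss from the sketch above
    """
    eta_hat = np.asarray(eta_hat, dtype=float)
    n = len(eta_hat)
    pi = eta_hat.mean()                   # plug-in estimate of P(y = 1)
    order = np.argsort(-eta_hat)          # instance indices, descending eta
    cum_tp = np.cumsum(eta_hat[order])    # expected TP count among the top s
    best_s, best_val = 0, loss(0.0, 0.0, pi, 1.0 - pi)  # all-negative baseline
    for s in range(1, n + 1):
        tp = cum_tp[s - 1] / n            # expected TP rate for top-s positive
        fp = s / n - tp
        fn = pi - tp
        tn = 1.0 - pi - fp
        val = loss(tp, fp, fn, tn)
        if val < best_val:
            best_s, best_val = s, val
    return best_s, order[:best_s]         # count, and indices labeled positive
```

For TP monotonic losses the decision reduces to how many top-ranked instances to label positive, so a one-dimensional search over s suffices and the sort makes the whole computation O(n log n); this matches the thresholding structure the paper establishes for such losses.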
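
The "Experiment Setup" row notes L2-regularized logistic loss and a plugin estimator with threshold selection, but no hyperparameters. Purely as an illustration of that baseline pattern, here is a minimal sketch assuming scikit-learn (our choice of library; the paper's tooling was MATLAB-based) and hypothetical values for the regularization strength C and the holdout fraction:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def plugin_with_threshold(X, y, C=1.0, seed=0):
    """Fit an L2-regularized logistic model, then choose the decision
    threshold that maximizes F1 on a held-out split. C=1.0 and the
    80/20 split are illustrative defaults, not values from the paper."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    clf = LogisticRegression(penalty="l2", C=C).fit(X_tr, y_tr)
    probs = clf.predict_proba(X_val)[:, 1]
    # Scan the distinct predicted probabilities as candidate thresholds.
    best_t = max(np.unique(probs),
                 key=lambda t: f1_score(y_val, probs >= t))
    return clf, best_t
```

Test-time predictions are then `probs >= best_t`; swapping `f1_score` for another metric gives the same threshold-selection pattern for other losses.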