Consistent Multiclass Algorithms for Complex Performance Measures

Authors: Harikrishna Narasimhan, Harish Ramaswamy, Aadirupa Saha, Shivani Agarwal

ICML 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In a first set of experiments, we tested the consistency behavior of the algorithms on a synthetic data set for which the Bayes optimal performance could be calculated. Specifically, we used a 3-class synthetic data set... Figure 4 shows the performance of the different algorithms for the G-mean, H-mean, Q-mean and micro F1 measures. Our next set of experiments evaluates the Frank-Wolfe based algorithm on a variety of real data sets taken from the UCI repository. (A sketch of these performance measures appears after the table.)
Researcher Affiliation | Academia | Harikrishna Narasimhan (HARIKRISHNA@CSA.IISC.ERNET.IN), Harish G. Ramaswamy (HARISHGURUP@CSA.IISC.ERNET.IN), Aadirupa Saha (AADIRUPA.SAHA@CSA.IISC.ERNET.IN), Shivani Agarwal (SHIVANI@CSA.IISC.ERNET.IN); Indian Institute of Science, Bangalore 560012, INDIA
Pseudocode | Yes | Algorithm 1: Algorithm Based on Frank-Wolfe Method. (A hedged code sketch of this algorithm appears after the table.)
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the methodology described, nor does it include a link to a code repository.
Open Datasets | Yes | Our next set of experiments evaluates the Frank-Wolfe based algorithm on a variety of real data sets taken from the UCI repository (Frank & Asuncion, 2010; http://archive.ics.uci.edu/ml), as well as the CoRA, 20 Newsgroups, and RCV1 data sets (Forman, 2003; Druck et al., 2008; Lewis et al., 2004).
Dataset Splits | Yes | The results, averaged over 5 random 80%-20% train-test splits for each data set, are shown in Table 3.
Hardware Specification | No | The paper discusses training times and computational efficiency but does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions algorithms like 'multiclass logistic regression' and 'SVMperf', but it does not specify any software libraries, frameworks, or their version numbers used for implementation.
Experiment Setup | No | The paper mentions using linear models and regularized logistic regression, but it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, regularization strength) or other training configurations.
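
The Research Type row quotes experiments with the G-mean, H-mean, Q-mean and micro F1 measures, which are functions of the full confusion matrix rather than averages of per-example losses. Below is a minimal NumPy sketch of these measures, assuming the standard definitions in terms of per-class recalls; the micro F1 variant treats class 0 as the "default" (negative) class, following our reading of the paper's Table 1, and all function names are illustrative rather than taken from the paper.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Empirical confusion matrix C with C[i, j] = fraction of examples
    having true class i and predicted class j (entries sum to 1)."""
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1.0
    return C / len(y_true)

def per_class_recall(C):
    # recall of class i = C[i, i] divided by the class-i prior (row sum i)
    return np.diag(C) / C.sum(axis=1)

def g_mean(C):
    # geometric mean of the per-class recalls
    return float(np.prod(per_class_recall(C)) ** (1.0 / C.shape[0]))

def h_mean(C):
    # harmonic mean of the per-class recalls
    r = per_class_recall(C)
    return float(len(r) / np.sum(1.0 / r))

def q_mean(C):
    # one minus the root-mean-square of the per-class miss rates
    r = per_class_recall(C)
    return float(1.0 - np.sqrt(np.mean((1.0 - r) ** 2)))

def micro_f1(C):
    # micro-averaged F1 with class 0 treated as the default/negative class
    tp = np.trace(C) - C[0, 0]
    return float(2.0 * tp / (2.0 - C[0, :].sum() - C[:, 0].sum()))
```

For example, `q_mean(confusion_matrix(y, preds, 3))` evaluates the Q-mean of a 3-class prediction.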
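The Pseudocode row refers to the paper's Algorithm 1, a Frank-Wolfe based method that repeatedly linearizes the performance measure at the current confusion matrix and solves the resulting cost-sensitive problem with a plug-in classifier. The sketch below is a non-authoritative rendering of that idea under stated assumptions: class-probability estimates (e.g., from regularized logistic regression) on a held-out sample are taken as given, the gradient of the measure is supplied via a `grad_psi` callable, and the function and variable names are hypothetical rather than taken from the paper.

```python
import numpy as np

def frank_wolfe_plugin(eta, y, grad_psi, T=50):
    """Sketch of a Frank-Wolfe style plug-in procedure (after Algorithm 1).

    eta      : (n, k) array of estimated class probabilities on a held-out sample
    y        : (n,) array of true labels (values 0..k-1) for the same sample
    grad_psi : callable taking a k x k confusion matrix and returning the
               gradient (a k x k "gain" matrix) of the performance measure
    Returns a list of [weight, gain_matrix] pairs defining a randomized
    classifier: draw G with probability `weight`, then predict
    argmax_j sum_i G[i, j] * eta_i(x).
    """
    n, k = eta.shape

    def conf_matrix(pred):
        C = np.zeros((k, k))
        for t, p in zip(y, pred):
            C[t, p] += 1.0
        return C / n

    # start from the plain plug-in (most-probable-class) classifier
    components = [[1.0, np.eye(k)]]
    C = conf_matrix(np.argmax(eta, axis=1))

    for t in range(1, T + 1):
        G = grad_psi(C)                      # linearize psi at the current C
        pred = np.argmax(eta @ G, axis=1)    # cost-sensitive plug-in step
        C_new = conf_matrix(pred)
        gamma = 2.0 / (t + 1)                # standard Frank-Wolfe step size
        C = (1.0 - gamma) * C + gamma * C_new
        for comp in components:              # rescale earlier component weights
            comp[0] *= (1.0 - gamma)
        components.append([gamma, G])
    return components
```

The `grad_psi` argument could, for instance, be a numerical gradient of one of the measure functions sketched above; to classify a new point, one samples a gain matrix G according to the returned weights and predicts the class maximizing the G-weighted probability estimates.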