Transductive Bounds for the Multi-Class Majority Vote Classifier

Authors: Vasilii Feofanov, Emilie Devijver, Massih-Reza Amini (pp. 3566-3573)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on different data sets show the effectiveness of our approach compared to the same algorithm where the threshold is fixed manually, to the extension of TSVM to multi-class classification and to a graph-based semi-supervised algorithm.
Researcher Affiliation | Academia | Vasilii Feofanov, Emilie Devijver, Massih-Reza Amini. University Grenoble Alpes, Grenoble INP, LIG, CNRS, Grenoble 38000, France. firstname.lastname@univ-grenoble-alpes.fr
Pseudocode | Yes | Algorithm 1: Multi-class self-learning algorithm (MSLA)
Open Source Code | Yes | The source code of the algorithm can be found at https://github.com/vfeofanov/trans-bounds-maj-vote.
Open Datasets | Yes | Experiments are conducted on 5 publicly available data sets (Dheeru and Karra Taniskidou 2017; Chang and Lin 2011).
Dataset Splits | Yes | Each experiment is conducted 20 times, by randomly splitting the labeled and the unlabeled training sets from the original data sets, keeping their respective sizes (l and u) fixed at each iteration.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided in the paper.
Software Dependencies | No | The paper mentions software such as the 'Random Forest model' and the 'scikit-learn implementation' but does not provide version numbers for these or other software dependencies.
Experiment Setup | Yes | In our experiments, we considered the Random Forest model with 200 trees and the maximal depth of trees (Breiman 2001), denoted by H in Algorithm 1, as the majority vote classifier with uniform posterior distribution. As the size of the labeled training examples (|ZL|) is small, we did not tune the hyperparameters of the classifier and left them at their default values.