Multiclass Learning from Contradictions
Authors: Sauptik Dhar, Vladimir Cherkassky, Mohak Shah
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the efficacy of MU-SVM on several real world datasets achieving > 20% improvement in test accuracies compared to M-SVM. |
| Researcher Affiliation | Collaboration | Sauptik Dhar, LG Electronics, Santa Clara, CA 95054, sauptik.dhar@lge.com; Vladimir Cherkassky, University of Minnesota, Minneapolis, MN 55455, cherk001@umn.edu; Mohak Shah, LG Electronics, Santa Clara, CA 95054, mohak.shah@lge.com |
| Pseudocode | No | The paper presents mathematical formulations but does not contain structured pseudocode or algorithm blocks (e.g., a clearly labeled Algorithm X). |
| Open Source Code | No | The paper does not provide an explicit statement or link confirming the release of their specific source code for the methodology described in the paper. It mentions existing tools like MSVMpack [33] and libsvmtools [34] but not the authors' own implementation. |
| Open Datasets | Yes | We use three real life datasets discussed next: German Traffic Sign Recognition Benchmark (GTSRB) [38]: ... Handwritten characters (ABCDETC) [16]: ... Speech-based Isolated Letter recognition (ISOLET) [39]: |
| Dataset Splits | Yes | Table 1 (Real-life datasets) — train / test size: GTSRB 300 / 1500 (100 / 500 per class); ABCDETC 600 / 400 (150 / 100 per class); ISOLET 500 / 500 (100 / 100 per class). ... Model selection within each partition is done using stratified 5-fold CV [35]. |
| Hardware Specification | Yes | We use a desktop with a 12-core Intel Xeon @ 3.5 GHz and 32 GB RAM. |
| Software Dependencies | No | The paper refers to existing software packages in its references (e.g., 'MSVMpack' [33], 'libsvmtools' [34]) but does not provide specific version numbers for any software dependencies used in their experiments. |
| Experiment Setup | Yes | For the model parameters our initial experiments showed linear parameterization to be optimal for GTSRB; hence only linear kernel has been used for it. For ABCDETC and ISOLET an RBF kernel K(x_i, x_j) = exp(−γ‖x_i − x_j‖²) with γ = 2^−7 provided optimal results for M-SVM. For all the experiments model selection is done over the range of parameters C = [10^−4, ..., 10^3], C*/C = n/(mL), and Δ = [0, 0.01, 0.05, 0.1]. |
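
The authors do not release their implementation, but the quoted setup is concrete enough to sketch the baseline selection protocol. The snippet below is a minimal, hypothetical illustration (not the authors' code) of stratified 5-fold CV model selection over C = [10^−4, ..., 10^3] with an RBF kernel fixed at γ = 2^−7, using scikit-learn's `SVC`. Note that `SVC` performs one-vs-one multiclass classification rather than the Crammer-Singer M-SVM used in the paper, and it has no universum parameters (C*/C, Δ), so those are omitted here.

```python
# Hypothetical sketch only -- not the authors' implementation.
# Illustrates the quoted protocol: stratified 5-fold CV model selection
# over C = [10^-4, ..., 10^3] with an RBF kernel at gamma = 2^-7.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def select_svm_baseline(X_train, y_train):
    """Return the best SVC found by stratified 5-fold CV over the quoted C grid."""
    param_grid = {"C": np.logspace(-4, 3, num=8)}   # C = [10^-4, ..., 10^3]
    base = SVC(kernel="rbf", gamma=2.0 ** -7)       # gamma = 2^-7 as quoted
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    search = GridSearchCV(base, param_grid, cv=cv, scoring="accuracy")
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_params_
```

In a reproduction attempt, the selected estimator would be retrained on the full training partition (300–600 samples per dataset, per Table 1) and then evaluated once on the held-out test partition.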