Virtual Class Enhanced Discriminative Embedding Learning
Authors: Binghui Chen, Weihong Deng, Haifeng Shen
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper empirically and experimentally demonstrates the superiority of Virtual Softmax, improving performance on a variety of object classification and face verification tasks. |
| Researcher Affiliation | Collaboration | Beijing University of Posts and Telecommunications; AI Labs, Didi Chuxing, Beijing 100193, China |
| Pseudocode | No | The paper includes mathematical formulations and derivations, but it does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that its source code is publicly available. |
| Open Datasets | Yes | Extensive experiments have been conducted on several datasets, including MNIST [17], SVHN [23], CIFAR10/100 [16], CUB200 [35], ImageNet32 [5], LFW [12] and SLLFW [6]. |
| Dataset Splits | No | The paper mentions 'training' and 'testing' but does not explicitly provide details about specific training/validation/test dataset splits, such as percentages or sample counts for each partition. |
| Hardware Specification | Yes | The models are trained on one Titan X and we fill it with different batch sizes for different networks. |
| Software Dependencies | No | The paper states 'All of our experiments are implemented by Caffe[14].' While it names the software, it does not specify a version number, nor does it list any other software components with their respective versions. |
| Experiment Setup | Yes | For training, the initial learning rate is 0.1, and is divided by 10 at (20k, 27k) and (12k, 18k) in CIFAR100 and the other datasets respectively, and the corresponding total iterations are 30k and 20k. |
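For reference, the Virtual Softmax objective assessed in the table can be sketched numerically. Per the paper's description, a single virtual class whose logit is ||W_y||·||x|| (the product of the target weight norm and the feature norm) is appended to the C real logits before the cross-entropy is taken, which enlarges the denominator and sharpens the decision margin. The function names and the random setup below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax_ce_loss(logits, target):
    """Standard cross-entropy over the given logits (numerically stable)."""
    logits = logits - logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[target])

def virtual_softmax_loss(W, x, target):
    """Virtual Softmax sketch: append one virtual logit ||W_y|| * ||x||
    to the C real logits, then take the usual cross-entropy."""
    logits = W.T @ x                                   # (C,) real class logits
    virt_logit = np.linalg.norm(W[:, target]) * np.linalg.norm(x)
    return softmax_ce_loss(np.append(logits, virt_logit), target)

# Toy example: feature dim 8, C = 4 classes (hypothetical values).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
x = rng.normal(size=8)
base = softmax_ce_loss(W.T @ x, 2)
virt = virtual_softmax_loss(W, x, 2)
# The virtual class only adds mass to the denominator, so virt >= base.
```

Since the virtual logit is positive in the exponential, the virtual loss upper-bounds the standard softmax loss for the same weights and feature, which is the mechanism the paper credits for learning more compact, discriminative embeddings.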