Maximum-Entropy Fine Grained Classification
Authors: Abhimanyu Dubey, Otkrist Gupta, Ramesh Raskar, Nikhil Naik
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide a theoretical as well as empirical justification of our approach, and achieve state-of-the-art performance across a variety of classification tasks in FGVC |
| Researcher Affiliation | Academia | Abhimanyu Dubey Otkrist Gupta Ramesh Raskar Nikhil Naik Massachusetts Institute of Technology Cambridge, MA, USA {dubeya, otkrist, raskar, naik}@mit.edu |
| Pseudocode | No | The paper describes the objective function and theoretical formulations, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for their methodology or provide links to a code repository. |
| Open Datasets | Yes | We obtain state-of-the-art results on all five datasets (Table 1-(A-E))... CUB-200-2011 [44], Cars [22], Aircrafts [28], NABirds [43], Stanford Dogs [19]... We evaluate Maximum-Entropy on the CIFAR-10 and CIFAR-100 datasets [23]... ImageNet [7] |
| Dataset Splits | No | The paper mentions 'validation sets' (e.g., 'ImageNet validation set') and '5-way cross-validation' for one specific experiment, but does not provide explicit overall train/validation/test split percentages or sample counts for the main FGVC datasets. |
| Hardware Specification | Yes | We perform all experiments using the PyTorch [32] framework over a cluster of NVIDIA Titan X GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch [32]' as the framework used but does not provide a specific version number for it or any other software dependencies. |
| Experiment Setup | No | The paper discusses the robustness to the choice of hyperparameter γ and label noise, but it does not provide specific details such as learning rate, batch size, number of epochs, or optimizer settings for the experimental setup. |
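Since the paper provides no pseudocode for its objective, a minimal sketch may help: the Maximum-Entropy approach adds the entropy of the predicted class distribution, weighted by the hyperparameter γ discussed above, as a regularizer subtracted from the standard cross-entropy loss. The function names below are illustrative, not taken from the paper, and this is a plain-Python sketch of the general formulation rather than the authors' implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def max_entropy_loss(logits, target, gamma=1.0):
    """Cross-entropy minus gamma times the predictive entropy.

    With gamma > 0, confident (low-entropy) predictions are penalized,
    which is the regularization effect the paper argues suits FGVC.
    """
    probs = softmax(logits)
    cross_entropy = -math.log(probs[target])
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    return cross_entropy - gamma * entropy
```

For a fixed prediction with nonzero entropy, increasing γ strictly lowers the loss, so training is pushed toward less peaked output distributions.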