Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification
Authors: Kanishk Jain, Shyamgopal Karthik, Vineet Gandhi
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on the iNaturalist-19 [40] and tieredImageNet-H [41] datasets. We conduct five runs for each experiment in Tables 1 and 2 and report the mean and standard deviations. |
| Researcher Affiliation | Academia | Kanishk Jain¹, Shyamgopal Karthik², Vineet Gandhi¹ — ¹IIIT Hyderabad, ²University of Tübingen |
| Pseudocode | No | The paper does not contain a clearly labeled "Pseudocode" or "Algorithm" block, nor does it present structured steps in a code-like format. |
| Open Source Code | Yes | The code is available at: https://github.com/kanji95/Hierarchical-Ensembles |
| Open Datasets | Yes | Like prior works [6, 8, 7], we evaluate our approach on the iNaturalist-19 [40] and tieredImageNet-H [41] datasets. |
| Dataset Splits | No | The paper mentions evaluation metrics for "validation" but does not explicitly provide details about a dedicated validation dataset split (e.g., percentages or sample counts) distinct from the training and test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or detailed computer specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | For the iNaturalist-19 dataset, we initialize the ResNet-18 weights with a pre-trained ImageNet model and train the classifiers using a customized SGD optimizer for 100 epochs, with different learning rates for the backbone network and the fully connected layer (0.01 and 0.1, respectively). For the tieredImageNet-H dataset, we train the ResNet-18 from scratch (since it is derived from the ImageNet dataset) and use the AMSGrad variant of the Adam optimizer with a learning rate of 1e-5 for 120 epochs. (A minimal optimizer sketch follows this table.) |
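
The following is a minimal PyTorch sketch of the two optimizer configurations quoted in the Experiment Setup row, included only as a reading aid. The paper does not specify its "customized SGD" details, the momentum value, or the exact class counts, so those figures below are assumptions rather than the authors' settings; only the learning rates (0.01 / 0.1 for SGD, 1e-5 for AMSGrad Adam), the epoch counts, and the choice of ResNet-18 come from the quoted text.

```python
import torch
import torchvision.models as models

# --- iNaturalist-19: ImageNet-pretrained ResNet-18, customized SGD, 100 epochs ---
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 1010)  # assumption: 1010 leaf classes

# Separate parameter groups so the backbone and the new FC layer get
# the two learning rates reported in the paper (0.01 and 0.1).
backbone_params = [p for n, p in model.named_parameters() if not n.startswith("fc.")]
sgd = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 0.01},       # backbone LR from the paper
        {"params": model.fc.parameters(), "lr": 0.1},  # FC-layer LR from the paper
    ],
    momentum=0.9,  # assumption: momentum is not stated in the quoted setup
)

# --- tieredImageNet-H: ResNet-18 trained from scratch, AMSGrad Adam, 120 epochs ---
scratch_model = models.resnet18(weights=None)
scratch_model.fc = torch.nn.Linear(scratch_model.fc.in_features, 608)  # assumption: 608 leaf classes
adam = torch.optim.Adam(scratch_model.parameters(), lr=1e-5, amsgrad=True)
```

Each optimizer would then be used in a standard training loop for the stated number of epochs (100 and 120, respectively); the loss, scheduler, and data pipeline are not described in the quoted setup and are omitted here.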