LUNA: Localizing Unfamiliarity Near Acquaintance for Open-Set Long-Tailed Recognition
Authors: Jiarui Cai, Yizhou Wang, Hung-Min Hsu, Jenq-Neng Hwang, Kelsey Magrane, Craig S. Rose
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our proposed method exceeds the state-of-the-art algorithm by 4-6% in closed-set recognition accuracy and 4% in F-measure under the open-set setting on the public benchmark datasets, including our newly introduced fine-grained OLTR dataset of marine species (MS-LT), which is the first naturally-distributed OLTR dataset revealing the genuine genetic relationships of the classes. |
| Researcher Affiliation | Collaboration | Jiarui Cai (1), Yizhou Wang (1), Hung-Min Hsu (1), Jenq-Neng Hwang (1), Kelsey Magrane (2), Craig Rose (3); (1) University of Washington, (2) Pacific States Marine Fisheries Commission (PSMFC), (3) FishNext Research |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | The ImageNet-LT (Liu et al. 2019) dataset is re-sampled from a subset of the original ImageNet-2012 (Deng et al. 2009) following a Pareto distribution. An extra 10 classes from ImageNet-2010 make up the open-set. The Places-LT (Liu et al. 2019) dataset is re-sampled from the Places-2 dataset (Zhou et al. 2017) for scene recognition. There are 69 new classes in Places-Extra69 used as the open-set. Our proposed Marine Species (MS)-LT dataset is naturally long-tailed distributed (see Figure 3). It is collected from the Gulf of Alaska and the Aleutian Islands in the U.S. from 2015 to 2019. |
| Dataset Splits | Yes | The training set follows a long-tailed distribution, while the testing and validation sets are balanced following the configuration of other long-tailed datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or specific cloud instance types used for experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | We use ResNet-10, ResNet-152 and ResNet-32 as the backbones for ImageNet-LT, Places-LT and MS-LT, respectively. Following the two-stage decoupling training scheme (Kang et al. 2019), we first train the model on the original imbalanced dataset by stochastic gradient descent (SGD) with a momentum of 0.9 and weight decay of 2×10⁻⁴ in mini-batches of 128 for 180 epochs; then continue training the model with progressively-balanced re-sampling (Kang et al. 2019) at a learning rate of 0.05 for an extra 50 epochs. |
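For context on the Open Datasets row: the long-tailed benchmarks cited there (ImageNet-LT, Places-LT) are built by re-sampling balanced source datasets so that per-class counts follow a Pareto distribution. The sketch below illustrates that kind of construction only; the power-law decay formula, the per-class cap of 1280, the floor of 5, and the helpers `pareto_class_sizes` / `make_long_tailed_split` are illustrative assumptions, not the benchmark authors' exact recipe.

```python
# Illustrative sketch: subsample a balanced dataset into a long-tailed split
# whose per-class counts decay following a power law (Pareto-style profile).
import random

def pareto_class_sizes(num_classes, shape=6.0, max_per_class=1280, min_per_class=5):
    """Monotonically decreasing class sizes under a power-law decay (assumed values)."""
    sizes = []
    for rank in range(num_classes):
        frac = (1.0 - rank / num_classes) ** shape  # decay with frequency rank
        sizes.append(max(min_per_class, int(max_per_class * frac)))
    return sizes

def make_long_tailed_split(samples_per_class, class_sizes, seed=0):
    """Subsample each class's index list down to its long-tailed budget."""
    rng = random.Random(seed)
    return {c: rng.sample(idxs, min(len(idxs), class_sizes[c]))
            for c, idxs in samples_per_class.items()}

# Example: 1000 balanced classes with 1300 images each, re-sampled to a long tail.
balanced = {c: list(range(1300)) for c in range(1000)}
lt_split = make_long_tailed_split(balanced, pareto_class_sizes(1000))
print(len(lt_split[0]), len(lt_split[999]))  # head-class vs. tail-class size
```

The Experiment Setup row quotes a two-stage decoupled training schedule (Kang et al. 2019). Below is a minimal PyTorch sketch of that schedule, not the authors' code: torchvision's ResNet-152 and FakeData stand in for the actual backbones and long-tailed training sets, the stage-1 learning rate is a placeholder (the paper only states 0.05 for stage 2), and a class-balanced WeightedRandomSampler approximates the progressively-balanced re-sampling of the second stage.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, models, transforms

def train_stage(model, loader, epochs, lr, device="cpu"):
    """One stage of SGD training with the hyperparameters quoted in the table."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=0.9,        # momentum of 0.9
                                weight_decay=2e-4)   # weight decay of 2x10^-4
    criterion = torch.nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

# Stand-ins: FakeData replaces the real long-tailed training set and
# torchvision's ResNet-152 replaces the paper's per-dataset backbones.
num_classes = 365
train_set = datasets.FakeData(size=512, image_size=(3, 224, 224),
                              num_classes=num_classes,
                              transform=transforms.ToTensor())
model = models.resnet152(num_classes=num_classes)

# Stage 1: original imbalanced data, mini-batch size 128, 180 epochs.
stage1_loader = DataLoader(train_set, batch_size=128, shuffle=True)
train_stage(model, stage1_loader, epochs=180, lr=0.1)  # stage-1 lr assumed, not stated

# Stage 2: re-sampled data, learning rate 0.05, 50 extra epochs. A class-balanced
# WeightedRandomSampler approximates the progressively-balanced scheme here.
labels = torch.tensor([train_set[i][1] for i in range(len(train_set))])
class_count = torch.bincount(labels, minlength=num_classes).clamp(min=1).float()
sample_weights = (1.0 / class_count)[labels]
stage2_loader = DataLoader(train_set, batch_size=128,
                           sampler=WeightedRandomSampler(sample_weights,
                                                         num_samples=len(sample_weights)))
train_stage(model, stage2_loader, epochs=50, lr=0.05)
```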
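In practice, reproducing the reported numbers would also require the paper's LUNA-specific components on top of this backbone schedule, which the table rows above indicate are not released as code.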