Frequency Shuffling and Enhancement for Open Set Recognition
Authors: Lijun Liu, Rui Wang, Yuan Wang, Lihua Jing, Chuan Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various benchmarks demonstrate that the proposed FreSH consistently trumps the state-of-the-arts by a considerable margin. |
| Researcher Affiliation | Academia | 1Institute of Information Engineering, CAS, Beijing, China 2School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China 3Department of Electronic Engineering, Tsinghua University {liulijun, wangrui, jinglihua, wangchuan}@iie.ac.cn, wy23@mails.tsinghua.edu.cn |
| Pseudocode | No | The paper describes the proposed methods using detailed text and figures (e.g., Figure 3 for the FreSH overview, Figure 4 for the DWT), but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | we evaluate the proposed method on MNIST (LeCun et al. 2010), SVHN (Netzer et al. 2011), CIFAR10 (Krizhevsky, Hinton et al. 2009), CIFAR+10, CIFAR+50, TinyImageNet (Le and Yang 2015). |
| Dataset Splits | Yes | Following standard OSR protocols (Guo et al. 2021; Neal et al. 2018), we follow the commonly used splits (Neal et al. 2018). |
| Hardware Specification | Yes | All experiments are conducted with NVIDIA RTX 3090 GPU support. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and VGG32 as the backbone network, but it does not specify versions for any programming languages, libraries, or other software dependencies. |
| Experiment Setup | Yes | We use the Adam optimizer with a batch size of 128 for 600 epochs. The learning rate starts at 0.1 and decays by a factor of 0.1 every 120 epochs. |
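The step-decay schedule quoted in the Experiment Setup row (start at 0.1, multiply by 0.1 every 120 epochs over a 600-epoch run) can be sketched as below. The function name and epoch-boundary convention are assumptions for illustration, not code from the paper; in PyTorch this corresponds to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=120, gamma=0.1)`.

```python
def lr_at_epoch(epoch: int, base_lr: float = 0.1,
                decay: float = 0.1, step: int = 120) -> float:
    """Learning rate under a step-decay schedule: the rate is multiplied
    by `decay` once every `step` epochs (hypothetical helper sketching
    the setup described in the paper's Experiment Setup row)."""
    return base_lr * decay ** (epoch // step)

# Over 600 epochs the rate steps down four times:
# epochs [0, 120) -> 0.1, [120, 240) -> 0.01, ..., [480, 600) -> 1e-5
schedule = [lr_at_epoch(e) for e in (0, 120, 240, 360, 480)]
```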