ANEDL: Adaptive Negative Evidential Deep Learning for Open-Set Semi-supervised Learning
Authors: Yang Yu, Danruo Deng, Furui Liu, Qi Dou, Yueming Jin, Guangyong Chen, Pheng Ann Heng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As demonstrated empirically, our proposed method outperforms existing state-of-the-art methods across four datasets. |
| Researcher Affiliation | Academia | Yang Yu1,*, Danruo Deng1,*, Furui Liu2, Qi Dou1, Yueming Jin3, Guangyong Chen2, Pheng Ann Heng1; 1The Chinese University of Hong Kong, Hong Kong, China; 2Zhejiang Lab, Hangzhou, China; 3National University of Singapore, Singapore |
| Pseudocode | No | The paper describes its method in detail but does not provide structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide a concrete link to source code or explicitly state that the code for the methodology is being released. |
| Open Datasets | Yes | We evaluate our method on four datasets, including CIFAR-10, CIFAR-100 (Krizhevsky, Hinton et al. 2009), ImageNet-30 (Hendrycks and Gimpel 2016) and Mini-ImageNet (Vinyals et al. 2016). |
| Dataset Splits | Yes | Given a dataset, we first split its class set into inlier and outlier class sets. Next, we randomly select inliers to form the labeled, unlabeled, validation, and test datasets. (A hedged sketch of this split protocol follows the table.) |
| Hardware Specification | No | The paper states: 'Experiments on ImageNet-30 are conducted with a single 24-GB GPU and other experiments are conducted with a single 12-GB GPU.' However, it does not name specific GPU models or provide other hardware details, such as CPU type or system memory, beyond VRAM capacity. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as programming language versions, library versions (e.g., PyTorch, TensorFlow), or specific solver versions. |
| Experiment Setup | Yes | For the pre-training stage, we set its length E_FM as 10 for all experiments. We set the length of an epoch as 1024 steps, which means we select a new set of inliers every 1024 steps during self-training. For hyper-parameters of FixMatch, we set them the same as in OpenMatch (Saito, Kim, and Saenko 2021). We adopt the SGD optimizer with 0.0001 weight decay and 0.9 momentum to train our model, setting the initial learning rate as 0.03 with a cosine decrease policy. We set the batch sizes of labeled and unlabeled samples as 64 and 128 respectively for all experiments. (A hedged sketch of this optimization setup follows the table.) |
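
The split protocol quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `split_open_set`, the class and sample counts, and the convention that leftover inliers plus all outliers form the unlabeled pool are assumptions (the last is the usual open-set SSL setup).

```python
# Minimal sketch of the inlier/outlier split protocol; names and counts
# are illustrative assumptions, not taken from the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def split_open_set(labels, num_inlier_classes, n_labeled, n_val, n_test):
    """Split classes into inliers/outliers, then sample labeled,
    unlabeled, validation, and test index sets from the inliers."""
    classes = rng.permutation(np.unique(labels))
    inlier_classes = set(classes[:num_inlier_classes].tolist())

    idx = np.arange(len(labels))
    is_inlier = np.array([y in inlier_classes for y in labels])
    inlier_idx = rng.permutation(idx[is_inlier])
    outlier_idx = idx[~is_inlier]

    labeled = inlier_idx[:n_labeled]
    val = inlier_idx[n_labeled:n_labeled + n_val]
    test = inlier_idx[n_labeled + n_val:n_labeled + n_val + n_test]
    # Assumption: remaining inliers plus all outliers form the unlabeled
    # pool, as is standard in open-set semi-supervised learning.
    unlabeled = np.concatenate([inlier_idx[n_labeled + n_val + n_test:],
                                outlier_idx])
    return labeled, unlabeled, val, test

# Toy usage: 10 classes, 6 kept as inliers.
labels = rng.integers(0, 10, size=600)
lab, unl, val, tst = split_open_set(labels, 6, n_labeled=50, n_val=50,
                                    n_test=100)
```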
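
The optimization setup in the Experiment Setup row maps onto a standard PyTorch configuration. A minimal sketch, assuming PyTorch: the placeholder model, the total epoch count, and the use of `CosineAnnealingLR` as the "cosine decrease policy" are assumptions; only the SGD hyper-parameters, the 1024-step epoch length, and the labeled batch size of 64 come from the paper.

```python
# Minimal PyTorch sketch of the reported optimization setup. The model,
# the total number of epochs, and CosineAnnealingLR are assumptions; the
# SGD hyper-parameters and 1024-step epochs come from the paper.
import torch
import torch.nn as nn

model = nn.Linear(512, 10)            # hypothetical placeholder model
steps_per_epoch = 1024                # paper: one epoch = 1024 steps
num_epochs = 10                       # assumed; total length not quoted here
total_steps = num_epochs * steps_per_epoch

optimizer = torch.optim.SGD(model.parameters(), lr=0.03,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                       T_max=total_steps)

for step in range(total_steps):
    x = torch.randn(64, 512)          # labeled batch size 64 (paper)
    loss = model(x).pow(2).mean()     # dummy loss for illustration only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```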