HRN: A Holistic Approach to One Class Learning

Authors: Wenpeng Hu, Mengyu Wang, Qi Qin, Jinwen Ma, Bing Liu

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimental evaluation using both benchmark image classification and traditional anomaly detection datasets show that HRN markedly outperforms the state-of-the-art existing deep/non-deep learning models." |
| Researcher Affiliation | Academia | Department of Information Science, School of Mathematical Sciences, Peking University; Wangxuan Institute of Computer Technology, Peking University; Center for Data Science, AAIS, Peking University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "The code of HRN can be found here." https://github.com/morning-dews/HRN |
| Open Datasets | Yes | MNIST [47] is a handwritten digit classification dataset... http://yann.lecun.com/exdb/mnist/; fMNIST (fashion-MNIST) [84] consists of a training set... https://github.com/zalandoresearch/fashion-mnist; CIFAR-10 [44] is also an image classification dataset... https://www.cs.toronto.edu/~kriz/cifar.html; KDDCUP99... http://kdd.ics.uci.edu/databases/kddcup99; Thyroid uses the version... http://archive.ics.uci.edu/ml; Arrhythmia uses the data split... http://archive.ics.uci.edu/ml |
| Dataset Splits | Yes | MNIST [47]: 60,000 images for training and 10,000 for testing; fMNIST (fashion-MNIST) [84]: 60,000 for training and 10,000 for testing; CIFAR-10 [44]: 50,000 for training and 10,000 for testing. TQM [82] set 10% of the data as the validation set for each dataset. The authors followed the TQM approach and used grid search, but searched for hyper-parameter values on the MNIST data only and then applied those values to all 5 datasets. |
| Hardware Specification | No | The paper does not describe the hardware used, such as specific GPU/CPU models or cloud resources. It only mentions that "Each experiment on a class takes less than 5 minutes." |
| Software Dependencies | No | The paper mentions "SGD with moment as the optimizer" and "ReLU or Leaky-ReLU" as activation functions, but it does not specify version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | An MLP of size [784-100]-[100-1] is used for MNIST and fMNIST, of size 3*[1024-300]-[900-300]-[300-1] for CIFAR-10, of size [125-100]-[100-1] for KDDCUP99, and of size [6-100]-[100-1] for Thyroid. SGD with momentum is the optimizer, with learning rate 0.1. HRN is run for 100 epochs. Grid search ranges: for λ, from 0 to 1 with step 0.05, and for n, from 1 to 20 with step 1; the resulting λ = 0.1 and n = 12 were applied to all datasets in all experiments without change. |
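The MNIST/fMNIST network described in the Experiment Setup row — an MLP of size [784-100]-[100-1] — can be sketched in a few lines. This is a minimal NumPy forward pass under stated assumptions, not the authors' implementation: the HRN objective itself is not reproduced, the Leaky-ReLU slope and weight initialization are unspecified in the report and chosen here arbitrarily, and the scalar output is simply treated as a per-example normality score as is typical in one-class learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter shapes for the [784-100]-[100-1] MLP from the setup row.
# The initialization scale (0.01) is an assumption, not from the paper.
W1 = rng.normal(scale=0.01, size=(784, 100))
b1 = np.zeros(100)
W2 = rng.normal(scale=0.01, size=(100, 1))
b2 = np.zeros(1)

def leaky_relu(z, alpha=0.01):
    # The paper mentions ReLU or Leaky-ReLU; the slope alpha is assumed.
    return np.where(z > 0, z, alpha * z)

def score(x):
    # One scalar score per flattened 28x28 input image.
    h = leaky_relu(x @ W1 + b1)
    return (h @ W2 + b2).ravel()

x = rng.normal(size=(8, 784))  # a batch of 8 dummy "images"
s = score(x)
```

Training, per the report, would update these parameters with SGD (momentum, learning rate 0.1) for 100 epochs; the momentum coefficient is not stated in the table.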
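The hyper-parameter search described above (λ from 0 to 1 in steps of 0.05, n from 1 to 20 in steps of 1, searched on MNIST only and then reused for all datasets) can be sketched as a plain grid search. `validation_auc` here is a hypothetical stand-in: a real run would train HRN with each (λ, n) pair and score the held-out 10% validation split; the toy surrogate below merely peaks at the values the paper reports selecting.

```python
import itertools

# Grid from the report: lambda in {0, 0.05, ..., 1.0}, n in {1, ..., 20}.
lambdas = [round(0.05 * i, 2) for i in range(21)]
ns = list(range(1, 21))

def validation_auc(lam, n):
    # Placeholder objective (assumption): stands in for training HRN
    # on MNIST and evaluating on the validation split. Constructed so
    # its maximum sits at the reported choice lambda = 0.1, n = 12.
    return -abs(lam - 0.1) - 0.01 * abs(n - 12)

best = max(itertools.product(lambdas, ns),
           key=lambda pair: validation_auc(*pair))
```

This mirrors the TQM-style protocol the report describes: the 420 grid points are evaluated once on MNIST, and the winning pair is then frozen for all five datasets.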