Generative Probabilistic Novelty Detection with Adversarial Autoencoders
Authors: Stanislav Pidhorskyi, Ranya Almohsen, Gianfranco Doretto
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An extensive set of results show that the approach achieves state-of-the-art performance on several benchmark datasets. Section 6 shows a rich set of experiments showing that GPND is very effective and produces state-of-the-art results on several benchmarks. |
| Researcher Affiliation | Academia | Lane Department of Computer Science and Electrical Engineering West Virginia University, Morgantown, WV 26506 |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | All reported results are from our publicly available implementation, based on the deep machine learning framework PyTorch [46]. (Footnote 1: https://github.com/podgorskiy/GPND) |
| Open Datasets | Yes | We evaluate GPND on the following datasets. MNIST [37]... The Coil-100 dataset [47]... Fashion-MNIST [48]... CIFAR-10(CIFAR-100) [49]... |
| Dataset Splits | Yes | Results are averages from a 5-fold cross-validation. Each fold takes 20% of each class. 60% of each class is used for training, 20% for validation, and 20% for testing. (See the split sketch below the table.) |
| Hardware Specification | Yes | The entire training procedure takes about one hour with a high-end PC with one NVIDIA TITAN X. |
| Software Dependencies | No | The paper mentions the 'deep machine learning framework PyTorch [46]' but does not provide a specific version number or other software dependencies with versions. |
| Experiment Setup | Yes | For MNIST and COIL-100 the latent space size was chosen to maximize F1 on the validation set. It is 16, and we varied it from 16 to 64 without significant performance change. For CIFAR-10 and CIFAR-100, the latent space size was set to 256. The hyperparameters of all losses are one, except for L_error and L_adv-dz when optimizing for D_z, which are equal to 2.0. For CIFAR-10 and CIFAR-100, the hyperparameter of L_error is 10.0. We use the Adam optimizer with learning rate of 0.002, batch size of 128, and 80 epochs. (See the configuration sketch below the table.) |
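
The split protocol quoted in the Dataset Splits row is concrete enough to sketch. The snippet below is a minimal, hypothetical reconstruction of a per-class 60/20/20 split over 5 folds; the helper name `make_class_splits` and the choice of which fold serves as validation are assumptions for illustration, not taken from the authors' code.

```python
import numpy as np

def make_class_splits(labels, fold, n_folds=5, seed=0):
    """Per-class 60/20/20 split: the held-out fold (20%) is the test set,
    the next fold (20%) is validation, and the remaining 60% is training."""
    rng = np.random.RandomState(seed)
    train_idx, val_idx, test_idx = [], [], []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        parts = np.array_split(idx, n_folds)            # ~20% of the class per fold
        test = parts[fold]                              # 20% test
        val = parts[(fold + 1) % n_folds]               # 20% validation (assumed rotation)
        train = np.concatenate([parts[i] for i in range(n_folds)
                                if i not in (fold, (fold + 1) % n_folds)])  # 60% train
        train_idx.append(train)
        val_idx.append(val)
        test_idx.append(test)
    return (np.concatenate(train_idx),
            np.concatenate(val_idx),
            np.concatenate(test_idx))

# Example: fold 0 of a 5-fold run over toy labels (10 classes, 100 samples each).
labels = np.repeat(np.arange(10), 100)
train, val, test = make_class_splits(labels, fold=0)
```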
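
The Experiment Setup row likewise pins down the reported optimization settings. The following is a minimal configuration sketch, assuming toy fully connected stand-ins for the encoder, decoder/generator, and the two discriminators; the actual architectures are convolutional and live in the authors' repository, so none of the module definitions below are theirs.

```python
from torch import nn, optim

# Hyperparameters reported in the table above.
latent_size = 16      # 16 for MNIST/COIL-100; 256 for CIFAR-10/CIFAR-100
batch_size = 128
epochs = 80
lr = 0.002

# Toy stand-ins for the encoder E, decoder/generator G, latent discriminator D_z,
# and image discriminator D_x (placeholders, not the paper's architectures).
image_dim = 32 * 32
E  = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, latent_size))
G  = nn.Sequential(nn.Linear(latent_size, 256), nn.ReLU(), nn.Linear(256, image_dim))
Dz = nn.Sequential(nn.Linear(latent_size, 128), nn.ReLU(), nn.Linear(128, 1))
Dx = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU(), nn.Linear(128, 1))

# One Adam optimizer per component, all with the reported learning rate.
opt_E  = optim.Adam(E.parameters(), lr=lr)
opt_G  = optim.Adam(G.parameters(), lr=lr)
opt_Dz = optim.Adam(Dz.parameters(), lr=lr)
opt_Dx = optim.Adam(Dx.parameters(), lr=lr)

# Loss weights reported above: 1.0 everywhere, except L_error and L_adv-dz
# (when optimizing D_z), which are 2.0; L_error is weighted 10.0 on CIFAR-10/100.
w_error  = 2.0
w_adv_dz = 2.0
```

This only sets up the components and weights; the adversarial training loop itself follows the paper and the linked implementation.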