The Tilted Variational Autoencoder: Improving Out-of-Distribution Detection
Authors: Griffin Floto, Stefan Kremer, Mihai Nica
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We empirically demonstrate that this simple change in the prior distribution improves VAE performance on the task of detecting unsupervised out-of-distribution (OOD) samples. We also introduce a new OOD testing procedure, called the Will-It-Move test, where the tilted Gaussian achieves remarkable OOD performance." and "Table 2: AUROC comparison between OOD detection methods with Fashion-MNIST and CIFAR10 as training distributions for various OOD sets." |
| Researcher Affiliation | Academia | Griffin Floto, University of Toronto, griffin.floto@mail.utoronto.ca; Stefan Kremer and Mihai Nica, University of Guelph, {skremer,nicam}@uoguelph.ca |
| Pseudocode | No | The paper describes the proposed methods in detail but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/anonconfsubaccount/tilted_prior |
| Open Datasets | Yes | The datasets used in the experiments are MNIST, Fashion-MNIST, KMNIST, CIFAR-10, SVHN, CelebA, and LSUN. |
| Dataset Splits | No | The paper mentions training and test data but does not specify train/validation/test splits (no percentages, exact sample counts, or citations to predefined splits) for general reproducibility. For the WIM test it states: "The WIM test was implemented by using batches that had equal parts training data, in-distribution test data, and OOD data. (We used 256 images of each type.)" |
| Hardware Specification | Yes | Experiments were run on an NVIDIA 3090 GPU with a Ryzen 3800X CPU |
| Software Dependencies | No | The paper mentions using IWAE and the ADAM optimizer, but does not provide version numbers for software dependencies such as Python, PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | For all tests the VAE is trained for 250 epochs with a batch size of 64, using the ADAM optimizer with a learning rate of 10^-4 and clipping gradients greater than 100. The tilted-prior parameter is τ = 30 for the Fashion-MNIST test and τ = 25 for the CIFAR-10 test. The WIM test uses α = 0.1 for all experiments. (A minimal sketch of this setup appears below the table.) |
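
For readers who want to reproduce these settings, the following is a minimal PyTorch sketch of the reported training configuration. The class `TiltedVAE`, the loader `make_loader`, and the objective `elbo_loss` are hypothetical placeholders (the paper's actual implementation is in the repository linked above); only the numeric hyperparameters are taken from the paper.

```python
# Minimal sketch of the reported optimization settings (PyTorch).
# `TiltedVAE`, `make_loader`, and `elbo_loss` are hypothetical placeholders;
# only the numeric hyperparameters below come from the paper.
import torch

EPOCHS = 250          # "train the VAE for 250 epochs"
BATCH_SIZE = 64       # "batch size of 64"
LEARNING_RATE = 1e-4  # ADAM optimizer with a learning rate of 10^-4
GRAD_CLIP = 100.0     # "clip gradients that are greater than 100"
TAU = 30.0            # tilted-prior parameter: 30 for Fashion-MNIST, 25 for CIFAR-10

model = TiltedVAE(tau=TAU)                                    # hypothetical model class
loader = make_loader("fashion-mnist", batch_size=BATCH_SIZE)  # hypothetical data loader
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

for epoch in range(EPOCHS):
    for x, _ in loader:
        optimizer.zero_grad()
        loss = elbo_loss(model, x)  # hypothetical VAE objective (e.g. an IWAE bound)
        loss.backward()
        # Gradient-norm clipping at 100 is assumed here; the paper only states
        # that gradients greater than 100 are clipped.
        torch.nn.utils.clip_grad_norm_(model.parameters(), GRAD_CLIP)
        optimizer.step()
```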