Multivariate Triangular Quantile Maps for Novelty Detection
Authors: Jingjing Wang, Sun Sun, Yaoliang Yu
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments over a number of real datasets confirm the efficacy of our proposed method against state-of-the-art alternatives. |
| Researcher Affiliation | Academia | University of Waterloo; National Research Council Canada |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are provided in the paper. |
| Open Source Code | Yes | Our code is available at https://github.com/GinGinWang/MTQ. |
| Open Datasets | Yes | In our experiments, we use two public image datasets: MNIST and Fashion-MNIST, as well as two non-image datasets: KDDCUP and Thyroid. A detailed description of these datasets, the applied network architectures, and the training hyperparameters can be found in Appendix A. Also: "Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/."; "Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747, 2017."; "Moshe Lichman. UCI machine learning repository. http://kdd.ics.uci.edu/databases/kddcup99."; "Moshe Lichman. UCI machine learning repository. http://archive.ics.uci.edu/ml." |
| Dataset Splits | Yes | For every class, we hold out 10% of the training set as the validation set, which is used to tune hyperparameters and to monitor the training process. (A minimal split sketch follows the table.) |
| Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) are provided for the experimental setup in the main text. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers in the main text. |
| Experiment Setup | No | The paper states that "A detailed description of these datasets, the applied network architectures, and the training hyperparameters can be found in Appendix A," indicating that specific experimental setup details are not provided in the main text. |
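
The per-class 90/10 hold-out described in the Dataset Splits row can be made concrete with a short script. The sketch below is not the authors' released code: it assumes torchvision's MNIST loader and NumPy, and the seed, variable names, and printed summary are illustrative only. It simply partitions each digit class's training examples into 90% training and 10% validation, as the paper describes.

```python
# Minimal sketch (assumption: torchvision is available) of the per-class
# 90/10 train/validation split described in the paper; not the authors' code.
import numpy as np
from torchvision import datasets

# Load the MNIST training set (one of the two public image datasets used).
mnist = datasets.MNIST(root="./data", train=True, download=True)
labels = mnist.targets.numpy()

rng = np.random.default_rng(0)  # illustrative fixed seed for a reproducible split
train_idx, val_idx = [], []
for c in range(10):  # split within each digit class
    idx = np.where(labels == c)[0]
    rng.shuffle(idx)
    n_val = int(0.1 * len(idx))  # hold out 10% of the class as validation
    val_idx.extend(idx[:n_val].tolist())
    train_idx.extend(idx[n_val:].tolist())

print(f"train examples: {len(train_idx)}, validation examples: {len(val_idx)}")
```

Splitting within each class matches the one-class novelty-detection setup, where each class in turn serves as the normal data and its held-out 10% is used to tune hyperparameters and monitor training.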