Unsupervised Learning of the Set of Local Maxima

Authors: Lior Wolf, Sagie Benaim, Tomer Galanti

ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show that the method is able to outperform one-class classification algorithms in the task of anomaly detection and also provide an additional signal that is extracted in a completely unsupervised way." and, from Section 4 (Experiments), "Since we share the same form of input with one-class classification, we conduct experiments using one-class classification benchmarks."
Researcher Affiliation | Collaboration | Lior Wolf, Facebook AI Research & The School of Computer Science, Tel Aviv University (wolf@fb.com, wolf@cs.tau.ac.il); Sagie Benaim & Tomer Galanti, The School of Computer Science, Tel Aviv University (sagieb@mail.tau.ac.il, tomerga2@post.tau.ac.il)
Pseudocode | Yes | "Algorithm 1: Training c and h"
Open Source Code | No | The paper does not include any explicit statement about releasing source code, nor does it provide a link to a code repository for the described method.
Open Datasets | Yes | "For example, in MNIST, the set S is taken to be the set of all training images of a particular digit. When applying our method, we train h and c on this set."; "We employ CIFAR also to perform an ablation analysis"; the German Traffic Sign Recognition Benchmark (GTSRB) of Houben et al. (2013); and the Cancer Genome Atlas (https://cancergenome.nih.gov/).
Dataset Splits | No | "We split the data to 90% train and 10% test." (for the Cancer Genome Atlas data). For MNIST and CIFAR, standard train/test splits are implied: "For MNIST, there is one experiment per digit, where the training samples are the training set of this digit... positive points are now the MNIST test images of the same digit used for training, and negative points are the test images of all other digits." No explicit validation split for model tuning is mentioned. (A minimal sketch of the reported 90/10 split follows the table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using the ADAM optimization scheme and implementing neural networks, but it does not list specific software dependencies with version numbers (e.g., Python, PyTorch or TensorFlow, or other libraries).
Experiment Setup | Yes | "The ADAM optimization scheme is used with mini-batches of size 32." and "In all our experiments we set λ = 1." (An illustrative sketch of this setup follows the table.)
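
The Dataset Splits row quotes a 90%/10% train/test split for the Cancer Genome Atlas data. The following is a minimal sketch of such a split; the function name, the fixed seed, and the use of NumPy are illustrative assumptions rather than details taken from the paper.

    # Minimal sketch of a 90%/10% train/test split as reported for the
    # Cancer Genome Atlas data. Function name, seed, and NumPy usage are
    # illustrative assumptions, not details from the paper.
    import numpy as np

    def split_90_10(samples: np.ndarray, seed: int = 0):
        """Shuffle the samples and return (train, test) arrays in a 90/10 ratio."""
        rng = np.random.default_rng(seed)
        order = rng.permutation(len(samples))
        cut = int(0.9 * len(samples))
        return samples[order[:cut]], samples[order[cut:]]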
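
The Pseudocode and Experiment Setup rows together indicate that Algorithm 1 trains two networks, c and h, with ADAM, mini-batches of size 32, and λ = 1. Below is a hypothetical PyTorch-style skeleton reflecting only those reported hyperparameters; the networks, the placeholder loss terms loss_c and loss_h, the learning rate, and the epoch count are assumptions, since the paper's actual objectives are not reproduced here.

    # Hypothetical skeleton showing only the reported hyperparameters
    # (ADAM, mini-batch size 32, lambda = 1). The placeholder losses stand
    # in for the paper's Algorithm 1 objectives, which are not given here.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    LAMBDA = 1.0       # "In all our experiments we set lambda = 1."
    BATCH_SIZE = 32    # "mini-batches of size 32"

    def train(c, h, train_x, loss_c, loss_h, epochs=10, lr=1e-3):
        """Jointly train the two networks c and h with ADAM.

        loss_c/loss_h are placeholder callables; epochs and lr are
        illustrative defaults, not values reported in the paper.
        """
        loader = DataLoader(TensorDataset(train_x), batch_size=BATCH_SIZE, shuffle=True)
        optimizer = torch.optim.Adam(list(c.parameters()) + list(h.parameters()), lr=lr)
        for _ in range(epochs):
            for (x,) in loader:
                # Combined, lambda-weighted objective over the two placeholder terms.
                loss = loss_c(c, h, x) + LAMBDA * loss_h(c, h, x)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()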