Robust One-Class Classification with Signed Distance Function using 1-Lipschitz Neural Networks

Authors: Louis Béthune, Paul Novello, Guillaume Coiffier, Thibaut Boissin, Mathieu Serrurier, Quentin Vincenot, Andres Troya-Galvis

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that OCSDF is competitive against concurrent methods on tabular and image data while being way more robust to adversarial attacks, illustrating its theoretical properties. Finally, as exploratory research perspectives, we theoretically and empirically show how OCSDF connects OCC with image generation and implicit neural surface parametrization."
Researcher Affiliation | Collaboration | (1) IRIT, Université Paul Sabatier; (2) DEEL, IRT Saint Exupéry; (3) Université de Lorraine, CNRS, Inria, LORIA; (4) Thales Alenia Space.
Pseudocode | Yes | "The procedure is summarized in algorithm 1. [...] Algorithm 1 Adapted Newton Raphson for Complementary Distribution Generation. [...] The final procedure of OCSDF is shown in Figure 1 and detailed in algorithm 3 of Appendix B. [...] Algorithm 2 Alternating Minimization for Signed Distance Function learning. [...] Algorithm 3 One Class Signed Distance Function learning." (A hedged sketch of the Newton-Raphson generation step follows the table.)
Open Source Code | Yes | "Our code is available at https://github.com/Algue-Rythme/OneClassMetricLearning."
Open Datasets | Yes | "We use two-dimensional toy examples from the Scikit-Learn library (Pedregosa et al., 2011). [...] We tested our algorithm on some of the most prominent anomaly detection benchmarks of ODDS library (Rayana, 2016). [...] train a classifier on each of the classes of MNIST and Cifar10. [...] We use models from Princeton's ModelNet10 dataset (Wu et al., 2015)." (A data-loading sketch follows the table.)
Dataset Splits | No | The paper mentions train/test splits but does not explicitly specify a validation split, nor how validation data was used for hyperparameter tuning or early stopping during training.
Hardware Specification | Yes | "The hardware consists of a workstation with NVIDIA 1080 GPU with 8GB memory and a machine with 32GB RAM."
Software Dependencies | No | The paper mentions TensorFlow and the DEEL-LIP library, but it does not specify their version numbers, which are necessary for reproducibility. (A version-recording snippet follows the table.)
Experiment Setup | Yes | "The optimizer is RMSprop with default hyperparameters. We chose m = 0.02 [...] We take λ = 200. We use a batch size b = 128, and a number of steps T = 16. [...] The network is trained for a total of 70 epochs over the one class [...], using a warm start of 10 epochs with T = 0. The learning rate follows a linear decay from 1e-3 to 1e-6." (A configuration sketch follows the table.)
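
The Pseudocode row cites Algorithm 1, an adapted Newton-Raphson scheme for generating points of the complementary distribution; the paper's exact procedure is in its Appendix B. Below is a minimal sketch of the underlying Newton-Raphson idea only, not the paper's algorithm: it assumes a scalar score function f mapping a (batch, d) tensor to (batch, 1) scores, a target level set f(x) = level, and the step count T = 16 from the Experiment Setup row. The function name, the level value, and the eps safeguard are hypothetical.

import tensorflow as tf

def complementary_samples(f, x, level=0.0, steps=16, eps=1e-9):
    """Hypothetical sketch: pull initial points x toward the level set
    {x : f(x) = level} with Newton-Raphson steps along the gradient of f."""
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x)
            y = f(x)                # current scores, shape (batch, 1)
        g = tape.gradient(y, x)     # per-sample gradients, shape (batch, d)
        # Newton step for the scalar equation f(x) - level = 0 along grad f:
        #   x <- x - (f(x) - level) * grad f / ||grad f||^2
        sq_norm = tf.reduce_sum(g * g, axis=-1, keepdims=True)
        x = x - (y - level) * g / (sq_norm + eps)
    return x

Per the quoted algorithm names, such generated points would serve as the "complementary distribution" samples consumed by the alternating minimization of Algorithm 2.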
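
For the Open Datasets row, the two-dimensional toy examples come from scikit-learn's sklearn.datasets module. A minimal loading sketch, assuming make_moons as one such toy set (the specific generators, sample count, and noise level are not listed in the quoted excerpt):

from sklearn.datasets import make_moons

# One plausible two-dimensional toy set; n_samples and noise are assumptions.
X, _ = make_moons(n_samples=2000, noise=0.05, random_state=0)
# One-class setting: train only on the inlier points X; labels are discarded.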
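
Because the Software Dependencies row flags missing version numbers, a reproduction should record the exact versions it runs against. A minimal sketch using the standard library; the PyPI package names tensorflow and deel-lip are as published, and the printed versions depend on the local environment:

from importlib.metadata import version

# Log the installed versions, since the paper does not pin them.
for pkg in ("tensorflow", "deel-lip"):
    print(pkg, version(pkg))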
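
The Experiment Setup row fixes most hyperparameters. A minimal TensorFlow sketch of the stated optimizer and schedule, assuming a steps_per_epoch value (not given in the excerpt) and using PolynomialDecay with power=1.0 to realize the linear decay from 1e-3 to 1e-6:

import tensorflow as tf

EPOCHS = 70            # total epochs, per the paper
WARMUP_EPOCHS = 10     # warm start with T = 0
STEPS_PER_EPOCH = 100  # assumption: not stated in the excerpt

# Linear learning-rate decay from 1e-3 to 1e-6 (power=1.0 makes it linear).
lr = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-3,
    decay_steps=EPOCHS * STEPS_PER_EPOCH,
    end_learning_rate=1e-6,
    power=1.0,
)
optimizer = tf.keras.optimizers.RMSprop(learning_rate=lr)  # otherwise default hyperparameters

batch_size = 128   # b = 128
margin = 0.02      # m = 0.02
lam = 200.0        # λ = 200
newton_steps = 16  # T = 16 (set to 0 during the 10-epoch warm start)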