Few-Shot One-Class Classification via Meta-Learning

Authors: Ahmed Frikha, Denis Krompaß, Hans-Georg Köpken, Volker Tresp (pp. 7448-7456)

AAAI 2021

Reproducibility Variable Result LLM Response
Research Type Experimental Our experiments on eight datasets from the image and time-series domains show that our method leads to better results than classical OCC and few-shot classification approaches, and demonstrate the ability to learn unseen tasks from only few normal class samples. Moreover, we successfully train anomaly detectors for a real-world application on sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine, by using few normal examples.
Researcher Affiliation Collaboration Ahmed Frikha 1, 2, 4, Denis Krompaß 1, 2, Hans-Georg Köpken 3, Volker Tresp 2, 4 — 1 Siemens AI Lab, 2 Siemens Technology, 3 Siemens Digital Industries, 4 Ludwig Maximilian University of Munich. ahmed.frikha@siemens.com
Pseudocode Yes Algorithm 1 Meta-training of OC-MAML
Open Source Code Yes Code available under https://github.com/AhmedFrikha/Few-Shot-One-Class-Classification-via-Meta-Learning
Open Datasets Yes We evaluate our approach on 8 datasets from the image and time-series domains, including two synthetic time-series (STS) datasets that we propose as a benchmark for FS-OCC on time-series, and a real-world sensor readings dataset of CNC Milling Machine Data (CNCMMD). ... Table 1 shows the results averaged over 5 seeds of the classical OCC approaches (Top) and the meta-learning approaches, namely MAML, FOMAML, Reptile and their one-class versions (Bottom), on 3 image datasets and on the STS-Sawtooth dataset. ... MiniImageNet (MIN), Omniglot (Omn), MT-MNIST with Ttest = T0 and STS-Sawtooth (Saw).
Dataset Splits Yes To assess the model's adaptation ability to unseen tasks, the available tasks are divided into mutually disjoint task sets: one for meta-training Str, one for meta-validation Sval and one for meta-testing Stest. Each task Ti is divided into two disjoint sets of data, each of which is used for a particular MAML operation: Dtr is used for adaptation and Dval is used for validation, i.e., evaluating the adaptation. ... Algorithm 1: Require: Str: Set of meta-training tasks Require: α, β: Learning rates Require: K, Q: Batch size for the inner and outer updates Require: c: CIR for the inner-updates
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies No The paper mentions "some modules of the pyMeta library (Spigler 2019)" but does not provide specific version numbers for this or any other software dependencies.
Experiment Setup Yes Algorithm 1: Require: α, β: Learning rates Require: K, Q: Batch size for the inner and outer updates Require: c: CIR for the inner-updates. ... The cross-entropy loss was used for L. ... We conducted experiments using 5 different seeds and present the average in Table 4.
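The quoted requirements of Algorithm 1 (learning rates α and β, inner/outer batch sizes K and Q, and a class-imbalance rate c for the inner updates) can be illustrated with a minimal first-order sketch. This is not the authors' released code: the linear model, the batch-sampling helper, and the interpretation of c as the fraction of anomalous (label-1) examples in the adaptation batch are illustrative assumptions; only the loop structure and the parameter names follow the pseudocode quoted above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def loss_and_grad(w, X, y):
    """Cross-entropy loss (as in the paper) and its gradient for a linear model."""
    p = sigmoid(X @ w)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def sample_batch(X, y, size, cir, rng):
    """Sample a batch whose fraction of anomalous (label-1) examples is `cir`.

    With cir == 0 the batch contains only normal-class samples, which is
    the one-class inner-update setting of OC-MAML.
    """
    n_anom = int(round(size * cir))
    normal_idx = rng.choice(np.where(y == 0)[0], size - n_anom, replace=True)
    anom_idx = (rng.choice(np.where(y == 1)[0], n_anom, replace=True)
                if n_anom else np.array([], dtype=int))
    idx = np.concatenate([normal_idx, anom_idx])
    return X[idx], y[idx]

def oc_maml_step(w, tasks, alpha, beta, K, Q, c, rng):
    """One meta-training step in the spirit of Algorithm 1 (first-order variant).

    Each task is a tuple (X_tr, y_tr, X_val, y_val): Dtr is used for the
    inner adaptation, Dval for evaluating the adapted parameters.
    """
    meta_grad = np.zeros_like(w)
    for X_tr, y_tr, X_val, y_val in tasks:
        # Inner update: K samples drawn with class-imbalance rate c.
        Xk, yk = sample_batch(X_tr, y_tr, K, c, rng)
        _, g = loss_and_grad(w, Xk, yk)
        w_adapted = w - alpha * g
        # Outer objective: loss of the adapted parameters on a query batch
        # of size Q (balanced here, an assumption of this sketch).
        Xq, yq = sample_batch(X_val, y_val, Q, 0.5, rng)
        _, gq = loss_and_grad(w_adapted, Xq, yq)
        meta_grad += gq
    # Outer update averaged over the task batch.
    return w - beta * meta_grad / len(tasks)
```

Sampling the query batch with both classes while constraining the inner batch to c ≈ 0 is what distinguishes this one-class variant from standard MAML, whose inner batches are class-balanced as well.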