Towards Non-Parametric Drift Detection via Dynamic Adapting Window Independence Drift Detection (DAWIDD)
Authors: Fabian Hinder, André Artelt, Barbara Hammer
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate and compare DAWIDD with different state-of-the-art drift detection methods: we use HDDDM (Ditzler and Polikar, 2011), DDM (Gama et al., 2004), EDDM (Baena-García et al., 2006) and ADWIN (Bifet and Gavaldà, 2007), since these methods cover representative, distinct drift-detection schemes. We run our experiments on several standard benchmark data sets. (A usage sketch of some of these baseline detectors follows the table.) |
| Researcher Affiliation | Academia | 1Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany. |
| Pseudocode | Yes | Algorithm 1: Dynamic Adaptive Window Independence Drift Detector (DAWIDD). (A minimal sketch of the underlying idea follows the table.) |
| Open Source Code | Yes | The code is available at https://github.com/FabianHinder/DAWIDD |
| Open Datasets | Yes | We run our experiments on several standard benchmark data sets. For reasons of simplicity we used a sliding window in our implementation. Theoretical data: we use the following theoretical data sets, each containing 4 concepts and thus 3 concept drifts: Rotating hyperplane (Montiel et al., 2018) (200 samples per concept), SEA (Street and Kim, 2001) (400 samples per concept) and Random RBF (Montiel et al., 2018) (200 samples per concept). Real-world data: we use a total of three real-world data sets: the Electricity market prices data set (Harries et al., 1999), the Forest Cover Type data set (Blackard et al., 1998) and the Weather data set (Elwell and Polikar, 2011). (A sketch of how such synthetic drift streams can be assembled follows the table.) |
| Dataset Splits | No | The paper mentions using 'standard benchmark data sets' but does not specify how these datasets were split into training, validation, and test sets, either by percentages, counts, or by referring to a known splitting methodology. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions software components like 'Gaussian Naive Bayes classifier' and 'RBF-SVMs' but does not provide specific version numbers for these or any other ancillary software dependencies used in the experiments. |
| Experiment Setup | No | The paper states, 'We used standard hyperparameter settings' and discusses the window-size bounds (n_min, n_max) and the p-value as inputs to the DAWIDD algorithm. However, it does not report the concrete values or ranges used for these parameters in the experiments, nor other training configurations or system-level settings. |
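
The Pseudocode row cites Algorithm 1, which the table cannot reproduce. The following is a minimal sketch of the general idea behind a dynamic-window independence drift detector, not the authors' implementation: the class name `DriftDetectorSketch`, the window bounds `n_min`/`n_max`, the significance level, and the correlation-based permutation statistic are all illustrative assumptions; the paper's actual independence test and window-adaptation rules differ in detail (see the linked repository for the real code).

```python
import numpy as np

class DriftDetectorSketch:
    """Sketch of a dynamic-window independence drift detector.
    The dependence statistic below is a stand-in for the statistical
    independence test between time and data used in the paper."""

    def __init__(self, n_min=100, n_max=300, p_value=0.01, n_perm=500, seed=0):
        self.n_min, self.n_max = n_min, n_max   # window bounds (hypothetical defaults)
        self.p_value = p_value                  # significance level of the test
        self.n_perm = n_perm                    # number of permutations
        self.rng = np.random.default_rng(seed)
        self.window = []                        # most recent samples

    def _dependence_stat(self, X, t):
        # Stand-in statistic: mean absolute correlation between time and features.
        Xc = (X - X.mean(0)) / (X.std(0) + 1e-12)
        tc = (t - t.mean()) / (t.std() + 1e-12)
        return np.abs(Xc.T @ tc).mean() / len(t)

    def _independent(self, X):
        # Permutation test of "data is independent of time" within the window.
        t = np.arange(len(X), dtype=float)
        stat = self._dependence_stat(X, t)
        perms = [self._dependence_stat(X, self.rng.permutation(t))
                 for _ in range(self.n_perm)]
        p = (1 + sum(s >= stat for s in perms)) / (1 + self.n_perm)
        return p > self.p_value                 # True = no evidence of drift

    def add_sample(self, x):
        """Add one sample; return True if a drift is detected."""
        self.window.append(np.asarray(x, dtype=float))
        if len(self.window) > self.n_max:       # sliding-window simplification
            self.window.pop(0)
        if len(self.window) < self.n_min:
            return False
        X = np.vstack(self.window)
        if self._independent(X):
            return False
        # Dependence between time and data -> drift; keep only the newest samples.
        self.window = self.window[-self.n_min:]
        return True
```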
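
Among the baselines named in the Research Type row, DDM, EDDM and ADWIN are available in scikit-multiflow; HDDDM is not, so it is omitted here. Below is a small, self-contained usage sketch that monitors a toy binary error stream with an abrupt error-rate change at position 500; the stream and the drift location are made up for illustration and are not the paper's experiments.

```python
import numpy as np
from skmultiflow.drift_detection import ADWIN, DDM, EDDM

rng = np.random.default_rng(0)
# Toy error stream: 10% error rate for 500 steps, then 50% (abrupt drift).
stream = np.concatenate([rng.binomial(1, 0.1, 500),
                         rng.binomial(1, 0.5, 500)]).astype(float)

detectors = {"ADWIN": ADWIN(), "DDM": DDM(), "EDDM": EDDM()}
for name, det in detectors.items():
    changes = []
    for t, value in enumerate(stream):
        det.add_element(value)          # feed the per-sample error indicator
        if det.detected_change():
            changes.append(t)
    print(name, "detected change(s) at:", changes)
```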
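
The synthetic benchmarks listed under Open Datasets (Rotating hyperplane, SEA, Random RBF) are generated with scikit-multiflow (Montiel et al., 2018). The helper below sketches how a SEA stream with four concepts and three abrupt drifts could be assembled by switching the generator's classification function between concepts; the function name, seed and per-concept settings are assumptions, not the authors' exact setup (older scikit-multiflow versions may additionally require `gen.prepare_for_use()`).

```python
import numpy as np
from skmultiflow.data import SEAGenerator

def sea_stream_with_drifts(samples_per_concept=400, functions=(0, 1, 2, 3), seed=0):
    """Concatenate four SEA concepts -> three abrupt concept drifts."""
    X_parts, y_parts = [], []
    for fn in functions:
        gen = SEAGenerator(classification_function=fn, random_state=seed)
        X, y = gen.next_sample(samples_per_concept)
        X_parts.append(X)
        y_parts.append(y)
    return np.vstack(X_parts), np.concatenate(y_parts)

X, y = sea_stream_with_drifts()
print(X.shape, y.shape)   # (1600, 3) (1600,)
```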