Deep Structured Energy Based Models for Anomaly Detection
Authors: Shuangfei Zhai, Yu Cheng, Weining Lu, Zhongfei Zhang
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical studies on benchmark tasks demonstrate that our proposed model consistently matches or outperforms all the competing methods. |
| Researcher Affiliation | Collaboration | Binghamton University, Vestal, NY 13902, USA. IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA. Tsinghua University, Beijing, China. |
| Pseudocode | No | The paper does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | The specifications of benchmark datasets used are summarized in Table 1. (KDD99 (Lichman, 2013), Thyroid, Usenet from the UCI repository (Lichman, 2013), CUAVE (Patterson et al., 2002), NATOPS (Patterson et al., 2002), Caltech-101 (Fei-Fei et al., 2007), MNIST (Lecun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009)). |
| Dataset Splits | Yes | The split protocol varies by benchmark: "The training and test sets are split by 1:1 and only normal samples are used for training the model"; "The datasets are split into training and test by 2:1, where 2/3 of the normal samples are used for the training split"; "Each dataset is split into a training and testing set with a ratio of 2:1." (See the sketch below the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | No | The paper describes the model architectures and training method in general terms but does not specify concrete hyperparameters (e.g., learning rate, batch size, number of epochs) or other system-level training configurations for the experiments. |
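
The split protocol quoted in the Dataset Splits row can be made concrete with a short sketch. This is an illustrative reconstruction, not the authors' code (none is released, per the Open Source Code row): the function name, the NumPy-based implementation, and the `train_fraction` parameter are assumptions; only the ratios (1:1 or 2:1) and the normal-only training rule come from the paper.

```python
# Hypothetical sketch of the quoted split protocol: normal samples are divided
# into train/test by a fixed ratio (1:1 or 2:1 depending on the benchmark),
# the model trains on normal data only, and anomalies appear only at test time.
import numpy as np

def split_for_anomaly_detection(normal, anomalous, train_fraction=2 / 3, seed=0):
    """Split `normal` samples by `train_fraction` (2/3 gives the 2:1 split;
    use 0.5 for 1:1) and append all `anomalous` samples to the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(normal))
    n_train = int(train_fraction * len(normal))
    train = normal[idx[:n_train]]                    # normal-only training split
    test_x = np.concatenate([normal[idx[n_train:]], anomalous])
    test_y = np.concatenate([np.zeros(len(normal) - n_train),  # 0 = normal
                             np.ones(len(anomalous))])         # 1 = anomaly
    return train, test_x, test_y

# Example with synthetic data: 1000 normal and 50 anomalous 20-d points.
normal = np.random.randn(1000, 20)
anomalous = np.random.randn(50, 20) + 5.0
train, test_x, test_y = split_for_anomaly_detection(normal, anomalous)
```

Keeping anomalies out of the training split is what makes the setup a one-class / unsupervised anomaly-detection benchmark rather than a binary classification task.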