Contextual Outlier Interpretation
Authors: Ninghao Liu, Donghwa Shin, Xia Hu
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on various types of datasets demonstrate the flexibility and effectiveness of the proposed framework. |
| Researcher Affiliation | Academia | Ninghao Liu¹, Donghwa Shin¹, Xia Hu¹,² (¹Department of Computer Science and Engineering, Texas A&M University; ²Center for Remote Health Technologies and Systems, Texas A&M Engineering Experiment Station) {nhliu43, donghwa shin, xiahu}@tamu.edu |
| Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | We use both real and synthetic datasets in experiments. ... The real-world datasets used in our experiments include Wisconsin Breast Cancer (WBC) dataset [Asuncion and Newman, 2007], MNIST dataset and Twitter spammer dataset [Yang et al., 2011]. |
| Dataset Splits | Yes | The parameters of SVMs are tuned by validation, where some samples from O_i and C_i are randomly selected as the validation set. (See the sketch after this table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions tools like SVMs, RBM, and neural networks but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | No | The paper mentions that 'parameters of SVMs are tuned by validation', but it does not provide specific hyperparameter values (e.g., learning rate, batch size) or detailed system-level training settings. |
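
The Dataset Splits row quotes the paper's only description of its evaluation protocol: SVM parameters are tuned on a validation set drawn at random from the outlier samples O_i and their context samples C_i. The paper does not give the actual code or parameter grid, so the following is a minimal sketch of that kind of validation-based tuning, assuming Python with scikit-learn; the synthetic O_i/C_i data, the 70/30 split, and the C/gamma grid are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-ins for the outlier samples O_i and their context C_i:
# synthetic 2-D points labeled 1 (outlier) and 0 (context).
rng = np.random.RandomState(0)
O_i = rng.normal(loc=3.0, scale=0.5, size=(20, 2))
C_i = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
X = np.vstack([O_i, C_i])
y = np.concatenate([np.ones(len(O_i)), np.zeros(len(C_i))])

# Randomly hold out a validation set, mirroring the quoted description that
# "some samples from O_i and C_i are randomly selected as the validation set".
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Tune the SVM regularization strength and kernel width on the validation set
# (the grid below is an assumed example; the paper does not report one).
best_score, best_params = -np.inf, None
for C in [0.1, 1.0, 10.0]:
    for gamma in ["scale", 0.1, 1.0]:
        clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(X_train, y_train)
        score = accuracy_score(y_val, clf.predict(X_val))
        if score > best_score:
            best_score, best_params = score, {"C": C, "gamma": gamma}

print("selected SVM parameters:", best_params, "validation accuracy:", best_score)
```

Because the paper reports neither the grid, the split ratio, nor a random seed (see the Experiment Setup row), a sketch like this cannot reproduce the reported numbers; it only illustrates the tuning procedure the excerpt describes.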