Latent Discriminant Subspace Representations for Multi-View Outlier Detection
Authors: Kai Li, Sheng Li, Zhengming Ding, Weidong Zhang, Yun Fu
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on six datasets show our method outperforms the existing ones in identifying all types of multi-view outliers, often by large margins. |
| Researcher Affiliation | Collaboration | Department of Electrical & Computer Engineering, Northeastern University, Boston, USA; Adobe Research, USA; AI & Big Data Division, JD.COM American Technologies Corporation, USA; College of Computer & Information Science, Northeastern University, Boston, USA |
| Pseudocode | Yes | Algorithm 1. Optimization of (3) |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We employ six datasets for performance evaluation. Among them, five come from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/), i.e., zoo, letter, wine, wdbc, and pima. Table 1 shows the basic information about the five datasets. ... The last dataset is BUAA Vis Nir, which comprises facial images of 150 persons... (See the loading sketch after this table.) |
| Dataset Splits | No | The paper mentions training, testing, and generating outliers, but it does not specify how each dataset was split into training, validation, or test sets, nor does it cite standard splits for the datasets used. (A sketch of the outlier-generation convention used in this literature follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of existing methods like HOAD, AP, MLRA, and DMOD, but does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper tunes the baselines' parameters and states defaults for its own (e.g., 'We set α = 1 and β = 1 as default. We further evaluate the impact of λ on the performance with fixed α and β. Figure 2(b) shows the change of AUC with respect to different values of λ. We can see that the proposed method maintains good performances within a wide range for the value of λ. In practice, we choose λ = 0.1 as default.'), but it has no explicitly labeled experimental-setup section listing all hyperparameters and training configurations needed for reproduction. (A configuration sketch collecting the quoted defaults follows the table.) |
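
For reference, the five UCI datasets the paper names can be fetched from the repository URL it cites. Below is a minimal loading sketch, assuming the repository's usual `machine-learning-databases` file layout; the exact paths are assumptions and should be verified, since files occasionally move (pima, in particular, has had restricted availability over time).

```python
import pandas as pd

# Assumed UCI file paths; verify against http://archive.ics.uci.edu/ml/
# before relying on them.
UCI = "https://archive.ics.uci.edu/ml/machine-learning-databases"
DATASETS = {
    "zoo":    f"{UCI}/zoo/zoo.data",
    "letter": f"{UCI}/letter-recognition/letter-recognition.data",
    "wine":   f"{UCI}/wine/wine.data",
    "wdbc":   f"{UCI}/breast-cancer-wisconsin/wdbc.data",
    # "pima" is omitted: its UCI hosting has changed over time.
}

def load_uci(name: str) -> pd.DataFrame:
    """Fetch one of the paper's UCI datasets as a raw, unlabeled DataFrame."""
    return pd.read_csv(DATASETS[name], header=None)

if __name__ == "__main__":
    print(load_uci("wine").shape)  # rows x columns sanity check
```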
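On the splits row: papers in this line of work (HOAD, MLRA, DMOD) conventionally synthesize multi-view outliers rather than reuse fixed splits: class outliers by swapping one view between two samples from different classes, attribute outliers by replacing a sample's features in every view with random values, and class-attribute outliers by combining the two. The sketch below follows that convention for two views; it is the literature's protocol, not a procedure spelled out in this paper, and the function names are hypothetical.

```python
import numpy as np

def make_class_outliers(X1, X2, y, n_pairs, rng):
    """Swap view 2 between pairs of samples drawn from different classes.
    Each swapped sample looks normal within each view but inconsistent
    across views (a 'class outlier'). Returns the corrupted indices."""
    X1, X2 = X1.copy(), X2.copy()
    swapped = []
    classes = np.unique(y)
    for _ in range(n_pairs):
        c1, c2 = rng.choice(classes, size=2, replace=False)
        a = rng.choice(np.flatnonzero(y == c1))
        b = rng.choice(np.flatnonzero(y == c2))
        X2[[a, b]] = X2[[b, a]]
        swapped += [a, b]
    return X1, X2, np.unique(swapped)

def make_attribute_outliers(X1, X2, n_out, rng):
    """Replace the selected samples' features in both views with uniform
    noise drawn per feature, yielding 'attribute outliers' that are
    abnormal in every view. Returns the corrupted indices."""
    X1, X2 = X1.copy(), X2.copy()
    idx = rng.choice(len(X1), size=n_out, replace=False)
    for X in (X1, X2):
        lo, hi = X.min(axis=0), X.max(axis=0)
        X[idx] = rng.uniform(lo, hi, size=(n_out, X.shape[1]))
    return X1, X2, idx

rng = np.random.default_rng(0)  # fixed seed so corruption is repeatable
```

In evaluation, the returned corrupted indices serve as ground-truth outlier labels for computing AUC.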
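The quoted defaults (α = 1, β = 1, λ = 0.1) are the only configuration values the paper pins down, and AUC is its reported metric. A minimal sketch collecting them; the field names and the assumption that higher scores mean "more outlying" are ours, not the paper's.

```python
from dataclasses import dataclass
from sklearn.metrics import roc_auc_score

@dataclass
class Config:
    # Defaults quoted from the paper; everything else about the
    # training configuration is unspecified there.
    alpha: float = 1.0
    beta: float = 1.0
    lam: float = 0.1  # lambda; renamed since `lambda` is a Python keyword

def evaluate_auc(outlier_scores, is_outlier):
    """AUC as in the paper's Figure 2(b); assumes higher scores
    indicate outliers (flip the sign otherwise)."""
    return roc_auc_score(is_outlier, outlier_scores)
```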