Partial Multi-View Outlier Detection Based on Collective Learning
Authors: Jun Guo, Wenwu Zhu
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on benchmark datasets show that our proposed approach consistently and significantly outperforms state-of-the-art baselines. |
| Researcher Affiliation | Academia | Jun Guo (1), Wenwu Zhu (1,2); (1) Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen 518055, China; (2) Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China |
| Pseudocode | Yes | Algorithm 1 Partial Multi-view Outlier Detection Based on Collective Learning |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | Oxford Flowers Dataset (Flowers) (Nilsback and Zisserman 2006) is comprised of 17 flower classes, with 80 images per class. ... USPS-MNIST Dataset combines two popular handwritten datasets, USPS (Hull 1994) and MNIST (LeCun et al. 1998). |
| Dataset Splits | No | The paper describes how samples are selected and how partiality and outliers are generated ('randomly select 50 images per digit', 'delete the same number of samples', 'Partial Object Ratio (POR) from 0% to 75%', 'randomly perturb 10% of all data'), but it does not specify explicit train/validation/test splits as percentages or counts for model training and evaluation. (A data-generation sketch follows the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies or their version numbers (e.g., programming languages, libraries, frameworks, or solvers). |
| Experiment Setup | Yes | Since AUC is adopted as the evaluation metric, we do not need to specify the threshold τ in Algorithm 1. Then, there are two major parameters, i.e., the number of self-guided iterations T and the number of nearest neighbors k. ... we set a maximum iteration number T for the self-guided iteration. ... we judge the alternating optimization to be converged as long as the value of Eq.(7) changes negligibly (below 10^-7). ... Regarding parameter k, we limit it to a certain percentage of the total number of objects. It is observed that our proposed method achieves a relatively good performance when the proportion is in the range of [3%, 5%]. (A minimal evaluation sketch follows the table.) |
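The data-generation protocol quoted in the Dataset Splits row (per-class subsampling, a Partial Object Ratio from 0% to 75%, and perturbing 10% of all data) can be sketched as follows. This is a minimal reconstruction under assumptions: the quoted text does not say how the 10% perturbation is performed, so the swap-based outlier injection, the even split of partial objects between views, and the function name `make_partial_views` are illustrative only, not the authors' exact procedure.

```python
import numpy as np

def make_partial_views(X1, X2, por=0.50, outlier_ratio=0.10, rng=None):
    """Hypothetical reconstruction of the protocol quoted above: a POR fraction
    of objects keep only one view, and outlier_ratio of the objects are
    perturbed by swapping their view-2 rows, breaking cross-view consistency."""
    rng = np.random.default_rng(rng)
    n = X1.shape[0]
    assert X2.shape[0] == n, "views must be aligned over the same objects"

    # Partiality: a POR fraction of objects lose one of the two views,
    # split evenly between "view-1 only" and "view-2 only".
    n_partial = int(round(por * n))
    partial_idx = rng.choice(n, size=n_partial, replace=False)
    half = n_partial // 2
    mask_v1 = np.ones(n, dtype=bool)   # True where view 1 is observed
    mask_v2 = np.ones(n, dtype=bool)   # True where view 2 is observed
    mask_v2[partial_idx[:half]] = False
    mask_v1[partial_idx[half:]] = False

    # Outliers: perturb ~10% of all objects among those that still have both
    # views, by swapping their view-2 rows in pairs (assumed scheme).
    paired = np.where(mask_v1 & mask_v2)[0]
    n_out = min(int(round(outlier_ratio * n)), len(paired))
    n_out -= n_out % 2                 # pairwise swapping needs an even count
    out_idx = rng.choice(paired, size=n_out, replace=False)
    labels = np.zeros(n, dtype=int)    # 1 = outlier, 0 = inlier
    labels[out_idx] = 1
    X2 = X2.copy()
    a, b = out_idx[: n_out // 2], out_idx[n_out // 2:]
    X2[a], X2[b] = X2[b].copy(), X2[a].copy()
    return X1, X2, mask_v1, mask_v2, labels
```

The per-class subsampling quoted above ('randomly select 50 images per digit') would be applied before calling this helper; the defaults simply fall inside the quoted POR and perturbation ranges.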
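The Experiment Setup row states that AUC is the evaluation metric, that k is tied to a percentage (3% to 5%) of the total number of objects, and that the alternating optimization stops once the objective changes by less than 10^-7. A minimal evaluation harness consistent with those settings is sketched below; `score_fn`, `k_ratio`, `max_iters`, and `tol` are placeholder names standing in for Algorithm 1 and its parameters T, k, and the convergence threshold, not the authors' implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_auc(score_fn, X1, X2, mask_v1, mask_v2, labels,
                 k_ratio=0.04, max_iters=20, tol=1e-7):
    """Run a user-supplied partial multi-view outlier scorer and report AUC.

    score_fn is a placeholder for Algorithm 1: it should return one outlier
    score per object. k is set as a proportion of the number of objects,
    matching the [3%, 5%] range quoted above; max_iters and tol mirror the
    self-guided iteration cap T and the 10^-7 convergence criterion.
    """
    n = X1.shape[0]
    k = max(1, int(round(k_ratio * n)))       # k as a percentage of objects
    scores = score_fn(X1, X2, mask_v1, mask_v2,
                      k=k, max_iters=max_iters, tol=tol)
    # AUC ranks the scores directly, so the threshold tau in Algorithm 1
    # never needs to be fixed, as the quoted setup notes.
    return roc_auc_score(labels, scores)
```

Because AUC is threshold-free, sweeping k over 3% to 5% of the objects and reporting the resulting AUC values is enough to reproduce the parameter study described in the row above, without ever choosing τ.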