Deep Adversarial Multi-view Clustering Network
Authors: Zhaoyang Li, Qianqian Wang, Zhiqiang Tao, Quanxue Gao, Zhaohua Yang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on several real-world datasets demonstrate that the proposed method outperforms the state-of-the-art methods. (Section 4, Experiments) |
| Researcher Affiliation | Academia | (1) State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China. (2) Department of Electrical and Computer Engineering, Northeastern University, USA. (3) School of Instrumentation Science and Opto-electronics Engineering, Beihang University, China. |
| Pseudocode | No | No pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper does not provide a link to open-source code or explicitly state that the code is available. |
| Open Datasets | Yes | Handwritten numerals (HW) dataset [Asuncion and Newman, 2007] is composed of 2,000 data points from the ten digit classes 0 to 9, and each class has 200 data points. BDGP [Cai et al., 2012] is a two-view dataset including two different modalities, i.e., visual and textual data. The Columbia Consumer Video (CCV) dataset [Jiang et al., 2011] contains 9,317 YouTube videos with 20 diverse semantic categories. MNIST is a widely-used benchmark dataset consisting of handwritten digit images with 28×28 pixels. In our experiment, we employ its two-view version (70,000 samples) provided by [Shang et al., 2017]. |
| Dataset Splits | No | The paper does not explicitly provide details about training, validation, and test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | Yes | We run all the experiments on the platform of Ubuntu Linux 16.04 with NVIDIA Titan Xp Graphics Processing Units (GPUs) and 32 GB memory size. |
| Software Dependencies | No | The paper mentions "PyTorch" and "Adam optimizer" but does not specify version numbers for these software components. |
| Experiment Setup | Yes | We use the Adam [Kingma and Ba, 2014] optimizer with default parameter settings to train our model and fix the learning rate at 0.0001. We conduct 30 epochs for each training step. |
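Since no code is released, the reported setup can only be approximated. Below is a minimal PyTorch sketch of the stated training configuration (Adam with default parameters, learning rate fixed at 0.0001, 30 epochs per training step); the model, loss, and data loader are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of the reported training configuration.
# Assumptions: the autoencoder, reconstruction loss, and dummy data below
# are placeholders standing in for one view-specific network of DAMC.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder encoder/decoder for a single view.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 784))

# Adam with default betas/eps; learning rate fixed at 0.0001 as reported.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy single-view data; the paper uses multi-view datasets (HW, BDGP, CCV, MNIST).
loader = DataLoader(TensorDataset(torch.randn(2000, 784)), batch_size=64, shuffle=True)

criterion = nn.MSELoss()
for epoch in range(30):  # 30 epochs per training step, as stated in the paper
    for (x,) in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), x)  # reconstruction loss placeholder
        loss.backward()
        optimizer.step()
```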