Cross-Modal Subspace Clustering via Deep Canonical Correlation Analysis
Authors: Quanxue Gao, Huanhuan Lian, Qianqian Wang, Gan Sun (pp. 3938-3945)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on several real-world datasets demonstrate the proposed method outperforms the state-of-the-art methods. |
| Researcher Affiliation | Academia | 1State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China. 2State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, China. |
| Pseudocode | Yes | Algorithm 1 CMSC-DCCA |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code availability for the described methodology. |
| Open Datasets | Yes | Datasets Settings: The used datasets in our experiments include: 1) FRGC Dataset (Yang, Parikh, and Batra 2016); 2) Fashion-MNIST Dataset (Xiao, Rasul, and Vollgraf 2017); 3) YTF Dataset (Wolf, Hassner, and Maoz 2011); 4) COIL-20 Dataset |
| Dataset Splits | No | The paper mentions 'train' steps but does not provide specific details on dataset splits (e.g., percentages or exact sample counts) for training, validation, or test sets. |
| Hardware Specification | Yes | NVIDIA Titan Xp Graphics Processing Units (GPUs) and 64 GB memory size. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with version information. |
| Experiment Setup | Yes | Implementation details: In our model, we use the four-layer encoders including three convolution encoding layers and a fully connected layer, and the corresponding decoders consist of a fully connected layer and three deconvolution decoding layers. More specific settings are given in Table 1. ... We set the learning-rate to 0.001. ... We set 10000 epochs to train the entire network |
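
As background for the "Pseudocode" and "Experiment Setup" rows: CMSC-DCCA trains deep encoders under a canonical-correlation objective. The paper's Table 1 layer settings are not reproduced here, so below is only a minimal numpy sketch of the *linear* CCA computation that deep CCA generalizes; the synthetic two-view data, the `reg` ridge term, and all variable names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def linear_cca(X, Y, k=1, reg=1e-4):
    """Top-k canonical correlations between two views (classical linear CCA).

    DCCA optimizes this same correlation objective but first maps each
    view through a deep network; this sketch keeps the views linear.
    `reg` is a small ridge term for numerical stability (an assumption,
    not a value from the paper).
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Regularized within-view covariances and the cross-view covariance.
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten each view via Cholesky; the singular values of the whitened
    # cross-covariance are the canonical correlations.
    Lx_inv = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ly_inv = np.linalg.inv(np.linalg.cholesky(Syy))
    T = Lx_inv @ Sxy @ Ly_inv.T
    return np.linalg.svd(T, compute_uv=False)[:k]

# Two synthetic "modalities" that share one latent factor z: the top
# canonical correlation should be close to 1, the second one smaller.
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
X = np.column_stack([z, rng.standard_normal(500)])
Y = np.column_stack([z + 0.1 * rng.standard_normal(500),
                     rng.standard_normal(500)])
corrs = linear_cca(X, Y, k=2)
```

In the deep variant, `X` and `Y` would be replaced by encoder outputs (here, the three-convolution-plus-fully-connected encoders the paper describes) and the correlation objective back-propagated through them.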