Multi-View Representation Learning with Manifold Smoothness
Authors: Shu Li, Wei Wang, Wen-Tao Li, Pan Chen (pp. 8447–8454)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on real-world datasets reveal that our MvDGAT can achieve better performance than state-of-the-art methods. |
| Researcher Affiliation | Academia | Shu Li, Wei Wang, Wen-Tao Li, Pan Chen, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China {lis, wangw, liwt, chenp}@lamda.nju.edu.cn |
| Pseudocode | Yes | Algorithm 1: MvDGAT |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Course dataset is a webpage classification dataset... (Du et al. 2013; Jing et al. 2017); Advertisement dataset... (Zhao et al. 2018; Zhou, Liu, and Shao 2018); MNIST dataset... adopt the setting used in (Wang et al. 2015). |
| Dataset Splits | Yes | For each dataset, we randomly sample 10% data as the validation set, randomly sample γ (γ = 5%, 10%, 15%) of the remaining data as the labeled data, and use the rest data as the unlabeled data. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like 'Adam' and 'linear SVMs' but does not specify their version numbers or the programming environment (e.g., Python, PyTorch versions). |
| Experiment Setup | Yes | For MvDGAT, we first construct a k-nearest-neighbor graph for each view with edge weights exp(−d(x_i^v, x_j^v)/σ²) for different distance metrics, where k ∈ {1, 3, 5, 7, 9}, d(x_i^v, x_j^v) is the Euclidean or cosine distance, σ ∈ {10⁻², 10⁻¹, 1}, and v = 1, 2. ... The final dimension of each output layer is selected from {5, 10, 20, 50, 100} ... In the training process, we use dropout rate p = 0.3 and ReLU as the activation function... f1 and f2 are trained for a maximum of 200 epochs using Adam (Kingma and Ba 2015) and early stopping with a window size of 5. |
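The split protocol quoted above (10% validation, then a fraction γ of the remainder labeled, the rest unlabeled) can be sketched as follows. This is a minimal NumPy illustration of the described procedure, not the authors' code; the function name and seed handling are assumptions.

```python
import numpy as np

def split_indices(n, gamma, seed=0):
    """Split n samples into validation (10%), labeled (a fraction gamma
    of the remainder), and unlabeled sets, following the quoted protocol.
    Illustrative sketch only; not the authors' implementation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_val = int(round(0.10 * n))          # 10% held out for validation
    val, rest = idx[:n_val], idx[n_val:]
    n_lab = int(round(gamma * len(rest))) # gamma of the remaining data
    labeled, unlabeled = rest[:n_lab], rest[n_lab:]
    return val, labeled, unlabeled

# Example: n = 1000, gamma = 10% -> 100 validation, 90 labeled, 810 unlabeled
val, lab, unlab = split_indices(1000, gamma=0.10)
```

With γ ∈ {5%, 10%, 15%} as in the paper, only the labeled/unlabeled boundary moves; the validation set stays fixed at 10% of the full dataset.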
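The per-view graph construction in the setup row (k-nearest neighbors with Gaussian weights exp(−d/σ²), Euclidean or cosine distance) can be sketched as below. This is an assumed NumPy rendering of the quoted description; the function name and symmetrization choice are illustrative, since the paper's source code is not available.

```python
import numpy as np

def knn_graph(X, k=5, sigma=1.0, metric="euclidean"):
    """Build a k-NN affinity matrix with weights exp(-d(x_i, x_j) / sigma**2),
    one such graph per view. Minimal sketch under the quoted setup; not the
    authors' implementation."""
    n = X.shape[0]
    if metric == "euclidean":
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    else:  # cosine distance: 1 - cosine similarity
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        d = 1.0 - Xn @ Xn.T
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]  # skip index 0 (the point itself)
        W[i, nbrs] = np.exp(-d[i, nbrs] / sigma**2)
    return np.maximum(W, W.T)  # symmetrize the directed k-NN graph

X = np.random.default_rng(0).normal(size=(30, 4))
W = knn_graph(X, k=3, sigma=1.0)
```

The hyperparameter grid from the table (k ∈ {1, 3, 5, 7, 9}, σ ∈ {10⁻², 10⁻¹, 1}, two distance metrics) would then be searched per view on the validation split.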