Co-GCN for Multi-View Semi-Supervised Learning
Authors: Shu Li, Wen-Tao Li, Wei Wang
AAAI 2020, pp. 4691-4698 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real-world data sets verify that Co-GCN can achieve better performance compared with state-of-the-art multi-view semi-supervised methods. |
| Researcher Affiliation | Academia | Shu Li, Wen-Tao Li, Wei Wang. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. {lis, liwt, wangw}@lamda.nju.edu.cn |
| Pseudocode | Yes | Algorithm 1 Co-GCN. Input: X_1, X_2, Y, and adjacency matrices A_1 and A_2. Parameters: T_1, T_2, α_π and α_W. 1: for v = 1 to 2 do; 2: for t = 1 to T_v do; 3: Calculate the gradient of G_v with the loss function in Equation (2); 4: Fix {π_vw^(k)}, update {W_v^(k)} in G_v with the learning rate α_W; 5: Calculate the gradient of G_v with the loss function in Equation (2); 6: Fix {W_v^(k)}, update {π_vw^(k)} according to Equation (11) with the learning rate α_π; 7: end for; 8: end for. Output: Aggregation of G_1 and G_2 according to Equation (3). (A runnable sketch of this alternating loop is given after the table.) |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository for the methodology described. |
| Open Datasets | Yes | The Reuters data set (Bisson and Grimal 2012) is constructed from the Reuters RCV1/RCV2 Multilingual test collection. ... We follow previous work (Li et al. 2015) and select the widely used 7 classes to get 1474 images, which we call Cal7. |
| Dataset Splits | Yes | We randomly sample 10% data as the validation set, and then randomly sample γ (γ = 1%, 5%, 10%) of the remaining data as the labeled data, and the remainder of the data are used as the unlabeled data. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or cloud computing instances. |
| Software Dependencies | No | The paper mentions software components like "dropout", "ReLU", "softmax", and "Adam" for optimization, but it does not specify any version numbers for these or the underlying machine learning framework. |
| Experiment Setup | Yes | We use dropout (p = 0.3) after each layer, use ReLU as the activation function in the hidden layer, and use softmax activation function in the output layer. ... We train all models for a maximum of 2500 epochs (training iterations) using Adam (Kingma and Ba 2015) with a learning rate of α_W = 10^-3 and early stopping with a window size of 50, i.e., we stop the training process if the validation accuracy does not increase for 50 consecutive epochs. When updating π_vw^(k), the learning rate α_π is set to be 10^-2. (A sketch of this early-stopping loop follows the table.) |
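The pseudocode row quotes Algorithm 1 only in outline. The following is a minimal PyTorch sketch of that alternating update, not the authors' released code: it assumes two views, assumes each layer propagates features with a π-weighted combination of the views' normalized adjacencies, substitutes a plain cross-entropy loss on labeled nodes for the paper's Equation (2), and replaces the Equation (11) update of π with a projected gradient step.

```python
# Minimal sketch of Algorithm 1 (assumptions noted above; illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_views=2):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)                 # W_v^(k)
        self.pi = nn.Parameter(torch.full((n_views,), 1.0 / n_views))   # pi_vw^(k)

    def forward(self, X, A_hats):
        # Propagation matrix: pi-weighted combination of the views' normalized adjacencies.
        P = sum(p * A for p, A in zip(self.pi, A_hats))
        return P @ self.W(X)


class CoGCN(nn.Module):
    """One view-specific GCN G_v with a ReLU hidden layer and dropout p = 0.3."""

    def __init__(self, in_dim, hid_dim, n_cls):
        super().__init__()
        self.layer1 = CoGCNLayer(in_dim, hid_dim)
        self.layer2 = CoGCNLayer(hid_dim, n_cls)

    def forward(self, X, A_hats):
        H = F.relu(self.layer1(X, A_hats))
        H = F.dropout(H, p=0.3, training=self.training)
        return self.layer2(H, A_hats)        # logits; softmax is applied inside the loss


def train_view(model, X, A_hats, y, labeled, epochs, lr_w=1e-3, lr_pi=1e-2):
    """Alternate the two gradient steps of Algorithm 1 for a single view."""
    pi_params = [p for n, p in model.named_parameters() if n.endswith("pi")]
    w_params = [p for n, p in model.named_parameters() if not n.endswith("pi")]
    opt_w = torch.optim.Adam(w_params, lr=lr_w)    # step 4: update W, pi held fixed
    opt_pi = torch.optim.SGD(pi_params, lr=lr_pi)  # step 6: update pi, W held fixed
    for _ in range(epochs):
        opt_w.zero_grad()
        F.cross_entropy(model(X, A_hats)[labeled], y[labeled]).backward()
        opt_w.step()
        opt_pi.zero_grad()
        F.cross_entropy(model(X, A_hats)[labeled], y[labeled]).backward()
        opt_pi.step()
        with torch.no_grad():                      # keep each pi on the simplex
            for p in pi_params:
                p.clamp_(min=0.0)
                p.div_(p.sum().clamp_min(1e-12))
```

The Output step of Algorithm 1 aggregates G_1 and G_2 via Equation (3); a simple stand-in here would be to average the two views' softmax outputs at prediction time.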
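The Dataset Splits row can also be made concrete. Below is a small NumPy sketch under the assumption that the validation, labeled, and unlabeled sets are drawn uniformly at random over node indices; the paper does not publish split code, and the function name is ours.

```python
# Sketch of the split protocol: 10% validation, then gamma of the rest labeled,
# remainder unlabeled. Illustrative only.
import numpy as np

def make_split(n, gamma, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_val = int(0.1 * n)
    val_idx = idx[:n_val]
    rest = idx[n_val:]
    n_lab = int(gamma * len(rest))
    return rest[:n_lab], rest[n_lab:], val_idx   # labeled, unlabeled, validation

labeled, unlabeled, val = make_split(1474, gamma=0.05)   # e.g., Cal7 with gamma = 5%
```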
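Finally, the early-stopping rule in the Experiment Setup row (at most 2500 epochs, stop when validation accuracy has not improved for 50 consecutive epochs) can be sketched as below. The callables `train_one_epoch` and `validation_accuracy` are hypothetical placeholders supplied by the caller, not functions from the paper.

```python
# Early-stopping skeleton matching the setup row; callables are hypothetical.
def train_with_early_stopping(train_one_epoch, validation_accuracy,
                              max_epochs=2500, window=50):
    best_acc, stale = 0.0, 0
    for epoch in range(max_epochs):
        train_one_epoch()
        acc = validation_accuracy()
        if acc > best_acc:
            best_acc, stale = acc, 0
        else:
            stale += 1
            if stale >= window:
                break
    return best_acc
```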