Deep Graph Mating

Authors: Yongcheng Jing, Seok-Hee Hong, Dacheng Tao

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments across diverse domains, including node and graph property prediction, 3D object recognition, and large-scale semantic parsing, demonstrate that the proposed DuMCC effectively enables training-free knowledge transfer, yielding results on par with those of pre-trained models. ... We evaluate the performance of DuMCC across seven benchmarks spanning five GNN architectures. More ablation studies and sensitivity analyses, additional results and implementation details, as well as more visualisations, are detailed in Secs. D and E of the appendix.
Researcher Affiliation | Academia | Yongcheng Jing (University of Sydney), Seok-Hee Hong (University of Sydney), Dacheng Tao (Nanyang Technological University); {yongcheng.jing,seokhee.hong}@sydney.edu.au, dacheng.tao@ntu.edu.sg
Pseudocode | Yes | Algorithm 1: The proposed Dual-Message Coordinator and Calibrator (DuMCC) for GRAMA.
Open Source Code | Yes | Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: The code and model are provided in the supplementary material.
Open Datasets | Yes | For multi-class classification tasks on ogbn-arxiv [17], ogbn-products [4], and ModelNet40 [55], we adopt the dataset partition strategy widely used in model merging within the Euclidean domain [1, 28]. Specifically, each dataset is randomly split into two disjoint subsets: the first subset comprises 20% of the data with odd labels and 80% with even labels, while the second subset is arranged vice versa. For the semantic segmentation task on S3DIS [3]... In the multi-label classification task on ogbn-proteins [17]...
Dataset Splits | Yes | For multi-class classification tasks on ogbn-arxiv [17], ogbn-products [4], and ModelNet40 [55], we adopt the dataset partition strategy widely used in model merging within the Euclidean domain [1, 28]. Specifically, each dataset is randomly split into two disjoint subsets: the first subset comprises 20% of the data with odd labels and 80% with even labels, while the second subset is arranged vice versa.
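The quoted partition strategy (subset A takes 20% of odd-labelled and 80% of even-labelled samples; subset B takes the remainder) can be sketched in plain Python. The function name, seed handling, and 20%/80% parameterisation via `frac_odd` are illustrative assumptions, not the authors' released code:

```python
import random

def split_by_label_parity(indices, labels, frac_odd=0.2, seed=0):
    """Split a dataset into two disjoint subsets: subset A receives
    `frac_odd` of the odd-labelled samples and (1 - frac_odd) of the
    even-labelled ones; subset B receives the rest (vice versa)."""
    rng = random.Random(seed)
    odd = [i for i in indices if labels[i] % 2 == 1]
    even = [i for i in indices if labels[i] % 2 == 0]
    rng.shuffle(odd)
    rng.shuffle(even)
    n_odd_a = int(frac_odd * len(odd))
    n_even_a = int((1 - frac_odd) * len(even))
    subset_a = odd[:n_odd_a] + even[:n_even_a]
    subset_b = odd[n_odd_a:] + even[n_even_a:]
    return subset_a, subset_b
```

By construction the two subsets are disjoint and together cover the whole dataset, matching the "two disjoint subsets" wording in the report.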
Hardware Specification | No | The paper states that 'Computer resources needed are detailed in Sect. E of the appendix.' However, Appendix E is not provided in the given text, so no specific hardware models, processor types, or memory details are present in the extract.
Software Dependencies | No | The paper states 'Implementation follows the official codes provided by the Deep Graph Library (DGL) [52] and the original authors, including detailed architectures and hyperparameter settings.' It mentions DGL but does not specify version numbers for DGL or other software components. It points to Appendix E for more details, but that appendix is not provided.
Experiment Setup | Yes | We set the interpolation factor α in Eq. 1 to 0.5 for all experiments, with a sensitivity analysis provided in Sect. D of the appendix. For models originally equipped with normalisation layers, we recompute the running mean and running variance for the student GNN.
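The reported setup combines two steps: interpolating the two parent models' weights with α = 0.5 (in the style of the paper's Eq. 1) and recomputing normalisation-layer running statistics over data. A minimal dependency-free sketch, assuming parameters are stored as flat per-layer lists of floats and using a Welford-style statistics pass (both assumptions for illustration, not the authors' implementation):

```python
def interpolate_params(params_a, params_b, alpha=0.5):
    """Linear weight interpolation: theta = alpha * theta_a
    + (1 - alpha) * theta_b, applied layer by layer."""
    assert params_a.keys() == params_b.keys()
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(params_a[name], params_b[name])]
        for name in params_a
    }

def recompute_running_stats(batches):
    """Recompute running mean and variance per feature by streaming
    over batches of samples (Welford's online algorithm)."""
    n, mean, m2 = 0, None, None
    for batch in batches:
        for x in batch:
            if mean is None:
                mean = [0.0] * len(x)
                m2 = [0.0] * len(x)
            n += 1
            for j, v in enumerate(x):
                d = v - mean[j]
                mean[j] += d / n
                m2[j] += d * (v - mean[j])
    var = [s / n for s in m2]  # population variance over the stream
    return mean, var
```

In practice these running statistics would overwrite the normalisation layers' stale buffers in the merged student GNN before evaluation.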