Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Expand Horizon: Graph Out-of-Distribution Generalization via Multi-Level Environment Inference

Authors: Jiaqiang Zhang, Songcan Chen

AAAI 2025

Reproducibility variables, results, and supporting LLM responses:

Research Type: Experimental
LLM response: "Extensive experiments on real-world datasets demonstrate that our model achieves satisfactory performance compared with the state-of-the-art methods under various distribution shifts." "We conduct extensive experiments on six benchmark datasets under various distribution shifts. The results verify the effectiveness of our proposal compared with existing state-of-the-art methods."

Researcher Affiliation: Academia
LLM response: "1) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; 2) MIIT Key Laboratory of Pattern Analysis and Machine Intelligence"

Pseudocode: No
LLM response: The paper describes the methodology using mathematical equations and textual explanations, but it does not include any clearly labeled pseudocode or algorithm blocks.

Open Source Code: No
LLM response: The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository.

Open Datasets: Yes
LLM response: "We evaluate our proposed model on six datasets with different scales and properties, including Cora, Citeseer, Pubmed (Sen et al. 2008), Twitch (Rozemberczki and Sarkar 2021), Arxiv (Hu et al. 2020), and Elliptic (Pareja et al. 2020)."

Dataset Splits: Yes
LLM response: "For each dataset, we randomly split the ID data into 50%/25%/25% proportions for training, validation, and testing."

Hardware Specification: Yes
LLM response: "OOM indicates an out-of-memory error on a GPU with 24GB memory."

Software Dependencies: No
LLM response: The paper does not mention any specific software names with version numbers (e.g., PyTorch, TensorFlow, Python version) that were used in the experiments.

Experiment Setup: Yes
LLM response: "For each dataset, we randomly split the ID data into 50%/25%/25% proportions for training, validation, and testing. Following existing work, we use accuracy to evaluate performance on Cora, Citeseer, Pubmed, and Arxiv, and use ROC-AUC and macro F1 score as metrics for Twitch and Elliptic, respectively. Additionally, we conduct five trials with different initializations." ... "In this section, we study the impact of three hyperparameters, including the number of pseudo environments K, the trade-off factor λ, and the temperature coefficient τ in the Gumbel-Softmax." ... "Cora, Citeseer, and Pubmed achieve the best value at 0.008, 0.08, and 0.3, respectively. Regarding τ, in most cases, the best performance is achieved when τ=1."
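The 50%/25%/25% random split of ID data quoted above can be sketched as follows; the function name, seeding, and rounding of the last partition are assumptions for illustration, not details taken from the paper:

```python
import random

def random_split(indices, seed=0):
    """Shuffle node indices and split them 50%/25%/25% into
    train/val/test, mirroring the proportions quoted from the paper.
    (Seed handling and rounding choices here are assumptions.)"""
    rng = random.Random(seed)
    idx = list(indices)
    rng.shuffle(idx)
    n = len(idx)
    n_train = n // 2       # 50% for training
    n_val = n // 4         # 25% for validation
    # remaining ~25% for testing (absorbs any rounding remainder)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = random_split(range(1000))
```

Running five trials with different initializations, as the setup describes, would amount to repeating this split (and model training) with five different seeds.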
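The temperature coefficient τ mentioned in the setup controls how sharp the Gumbel-Softmax relaxation is: smaller τ pushes samples toward hard one-hot vectors, larger τ toward uniform. A minimal stdlib-only sketch of the standard Gumbel-Softmax sampler (not the authors' implementation; logits and seed are illustrative):

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, seed=None):
    """Sample a relaxed one-hot probability vector from categorical
    logits using the Gumbel-Softmax trick. The paper reports tau=1
    working best in most cases; everything else here is a generic
    textbook implementation, not taken from the paper."""
    rng = random.Random(seed)
    # Gumbel(0, 1) noise via inverse transform: -log(-log(U)), U ~ Uniform(0, 1)
    noise = [-math.log(-math.log(rng.random())) for _ in logits]
    scaled = [(l + g) / tau for l, g in zip(logits, noise)]
    # numerically stable softmax
    m = max(scaled)
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs = gumbel_softmax([2.0, 0.5, -1.0], tau=1.0, seed=0)
```

In an environment-inference setting like the paper's, the argmax of such a vector would assign a node or subgraph to one of the K pseudo environments while keeping the assignment differentiable during training.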