GLDL: Graph Label Distribution Learning

Authors: Yufei Jin, Richard Gao, Yi He, Xingquan Zhu

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | For verification, four benchmark datasets with label distributions for nodes are created using common graph benchmarks. The experiments show that considering dependency helps learn better label distributions for networked data, compared to state-of-the-art LDL baselines. |
| Researcher Affiliation | Academia | Yufei Jin (1), Richard Gao (2), Yi He (3), Xingquan Zhu (1). (1) Dept. of Electrical Engineering & Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA; (2) Dept. of Computer Science, Rice University, Houston, TX 77005, USA; (3) Dept. of Computer Science, Old Dominion University, Norfolk, VA 23529, USA. yjin2021@fau.edu; rdg3@rice.edu; yihe@cs.odu.edu; xzhu3@fau.edu |
| Pseudocode | Yes | Algorithm 1 ("The GLDL Algorithm") reports the detailed steps, covering both static and dynamic network details; an illustrative sketch of the general task setup follows the table below. |
| Open Source Code | Yes | Our code, benchmark data, and supplementary material are openly accessible at GitHub: https://github.com/Listener-Watcher/Graph-Distribution-Learning |
| Open Datasets | Yes | Our code, benchmark data, and supplementary material are openly accessible at GitHub: https://github.com/Listener-Watcher/Graph-Distribution-Learning |
| Dataset Splits | No | The paper uses benchmark datasets and mentions training/testing, but it does not explicitly provide specific details about training/validation/test splits (e.g., percentages or counts) or refer to standard predefined splits for reproducibility. |
| Hardware Specification | No | The paper does not specify the hardware used (e.g., CPU, GPU models, memory) for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | No | The paper describes the model architecture and training objectives but does not provide specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings). |
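
To make the task concrete, below is a minimal, hypothetical sketch of graph label distribution learning as the table describes it: a graph neural network predicts a per-node label distribution and is trained with a KL-divergence objective against ground-truth distributions. This is not the authors' GLDL algorithm (Algorithm 1 in the paper, which additionally models label dependency over the graph); the `GraphLDL` class, the `train_step` helper, and the two-layer GCN architecture are illustrative assumptions, and PyTorch plus PyTorch Geometric are assumed to be installed.

```python
# Hypothetical sketch of graph-based label distribution learning.
# NOT the authors' GLDL algorithm; it only illustrates the task setup:
# a GNN outputs a per-node label distribution, trained against
# ground-truth distributions with a KL-divergence loss.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GraphLDL(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_labels):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_labels)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        # Log-probabilities over labels: the predicted label distribution.
        return F.log_softmax(self.conv2(h, edge_index), dim=-1)


def train_step(model, optimizer, x, edge_index, target_dist, train_mask):
    model.train()
    optimizer.zero_grad()
    log_pred = model(x, edge_index)
    # KL divergence between predicted and ground-truth label distributions
    # on the training nodes; kl_div expects log-probabilities as input.
    loss = F.kl_div(log_pred[train_mask], target_dist[train_mask],
                    reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()
```

For an actual reproduction, the authors' released code at the GitHub link above should be followed rather than this sketch.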