Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning
Authors: Yuhao Wu, Jiangchao Yao, Bo Han, Lina Yao, Tongliang Liu
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across a variety of datasets have shown that GPL significantly outperforms baseline methods, confirming its effectiveness and superiority. (Abstract); see also Section 5, "Experiment". |
| Researcher Affiliation | Collaboration | (1) Sydney AI Center, The University of Sydney; (2) CMIC, Shanghai Jiao Tong University; (3) Shanghai AI Laboratory; (4) TMLR Group, Department of Computer Science, Hong Kong Baptist University; (5) University of New South Wales. |
| Pseudocode | Yes | Algorithm 1 Algorithm flow of GPL. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability. |
| Open Datasets | Yes | Datasets: We evaluate the performance of our method on various real-world datasets, each characterized by an edge homophily ratio h ranging from strong homophily to strong heterophily, as defined in Zhu et al. (2020). We have summarized the dataset details in Table 1. To transform these datasets into binary classification tasks, we follow the previous approach (Yoo et al., 2021; Yang et al., 2023)... (the edge homophily ratio is illustrated in the first sketch after this table) |
| Dataset Splits | No | The paper mentions training and test data, but does not explicitly describe a separate validation set or its role in hyperparameter tuning or model selection. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'GCN' as the backbone model, but does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | In the GPL, the backbone model is a graph convolutional network (GCN), with the number of layers set to 2 and the size of hidden layers set to 16. We train each model using Adam optimizer with a learning rate of 0.01. (a minimal configuration sketch follows the table) |
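
For reference, the edge homophily ratio h cited in the Open Datasets row is, per Zhu et al. (2020), the fraction of edges whose two endpoints share the same class label. The snippet below is a minimal sketch of that computation; the function name `edge_homophily` and the toy graph are illustrative assumptions, not code from the paper.

```python
import numpy as np

def edge_homophily(edge_index: np.ndarray, labels: np.ndarray) -> float:
    """Edge homophily ratio h (Zhu et al., 2020): the fraction of edges
    whose two endpoints share the same class label."""
    src, dst = edge_index          # edge_index has shape (2, num_edges)
    same = labels[src] == labels[dst]
    return float(same.mean())

# Toy usage: a 4-node graph with 3 edges, one of which joins same-label nodes.
edges = np.array([[0, 1, 2],
                  [1, 2, 3]])
y = np.array([0, 0, 1, 0])
print(edge_homophily(edges, y))    # -> 0.333..., i.e. moderately heterophilic
```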
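The Experiment Setup row reports a 2-layer GCN backbone with hidden size 16, trained with the Adam optimizer at a learning rate of 0.01. The following PyTorch Geometric sketch reproduces only that reported configuration; the class name `GCNBackbone`, the input dimension (1433, e.g. Cora's feature size), the output dimension, and the training objective are assumptions, since the paper's GPL loss and training schedule are not specified here.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNBackbone(torch.nn.Module):
    """Two-layer GCN matching the reported setup: 2 layers, hidden size 16."""
    def __init__(self, in_dim: int, hidden_dim: int = 16, out_dim: int = 2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)   # logits for the binary (PU) task

# Assumed input dimension; the reported learning rate is 0.01.
model = GCNBackbone(in_dim=1433)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
```

Weight decay, number of epochs, and the PU-specific loss are not reported in the quoted setup, so they are omitted here rather than guessed.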