GeniePath: Graph Neural Networks with Adaptive Receptive Paths

Authors: Ziqi Liu, Chaochao Chen, Longfei Li, Jun Zhou, Xiaolong Li, Le Song, Yuan Qi
Pages: 4424-4431

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate the proposed ASCW algorithm on three imbalanced measures, i.e., F-measure, AUROC, and AUPRC, and compare with various online learning and feature selection methods."
Researcher Affiliation | Academia | "Yanbin Liu (1,2), Yan Yan (2), Ling Chen (2), Yahong Han (3), Yi Yang (2); (1) SUSTech-UTS Joint Centre of CIS, Southern University of Science and Technology; (2) Centre for Artificial Intelligence, University of Technology Sydney; (3) College of Intelligence and Computing, Tianjin University"
Pseudocode | Yes | "Algorithm 1: Imbalanced sparse CW in online-batch manner" (see the sketch after this table)
Open Source Code | No | The paper does not provide any explicit statement about releasing the source code for the described method, nor does it provide a link to a code repository.
Open Datasets | Yes | "We conduct experiments on three widely-used high-dimensional benchmarks and sample with different ratios to construct nine imbalance configurations, as shown in Table 1."
Dataset Splits | No | The paper mentions 'training data' and a 'batch size' for online learning, but it does not give explicit train/validation/test splits by percentage or sample count, nor does it reference predefined splits that would let the data partitioning be reproduced.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions tools and libraries used for the comparison algorithms (e.g., Liblinear), but it does not specify version numbers for the software or libraries used in its own implementation or experiments.
Experiment Setup | Yes | "Batch Size. In Algorithm 1, µ is updated in a pure online manner and Σ is updated in an online-batch manner. To explain the necessity of the online-batch update and to explore a proper batch size, we perform experiments on news20 with various batch sizes, as shown in Table 2. The best performance is achieved with batch size = 1 (the strict online case); however, the time cost is unbearable. The performance of batch size = 256 is close to that of 64, but 256 is 3-4 times faster. We thus set batch size = 256 in the remaining experiments."
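The quoted setup hinges on Algorithm 1's split between a per-example mean update and a per-batch covariance update. The sketch below illustrates only that structure: it assumes a generic AROW-style diagonal-covariance update, and the class name OnlineBatchCW, the pos_weight imbalance cost, and the l1 soft-threshold sparsity step are hypothetical stand-ins rather than the paper's exact Algorithm 1.

```python
import numpy as np

class OnlineBatchCW:
    """Confidence-weighted (CW) learner with a pure online mean update
    and a deferred, online-batch covariance update.

    Minimal sketch under stated assumptions; not the paper's Algorithm 1.
    """

    def __init__(self, dim, r=1.0, pos_weight=1.0, l1=0.0, batch_size=256):
        self.mu = np.zeros(dim)       # weight mean: updated per example
        self.sigma = np.ones(dim)     # diagonal covariance: updated per batch
        self.r = r                    # CW/AROW regularization parameter
        self.pos_weight = pos_weight  # hypothetical extra cost on the minority class
        self.l1 = l1                  # hypothetical soft-threshold level for sparsity
        self.batch_size = batch_size
        self._buffer = []             # examples awaiting the Sigma update

    def update(self, x, y):
        """Process one example: x in R^dim, label y in {-1, +1}."""
        conf = (self.sigma * x) @ x                 # x^T Sigma x for diagonal Sigma
        loss = max(0.0, 1.0 - y * (self.mu @ x))    # hinge-style margin loss
        if loss > 0.0:
            cost = self.pos_weight if y > 0 else 1.0
            alpha = cost * loss / (conf + self.r)
            self.mu += alpha * y * self.sigma * x   # pure online mean update
            # Hypothetical sparsity step: soft-threshold the mean.
            self.mu = np.sign(self.mu) * np.maximum(np.abs(self.mu) - self.l1, 0.0)
        self._buffer.append(x)
        if len(self._buffer) >= self.batch_size:
            self._update_sigma()

    def _update_sigma(self):
        # Online-batch covariance update: apply the AROW-style shrinkage
        # once per batch; batch_size=1 recovers the strict online case.
        for x in self._buffer:
            beta = 1.0 / ((self.sigma * x) @ x + self.r)
            self.sigma -= beta * (self.sigma * x) ** 2
        self._buffer.clear()
```

With batch_size=1 the covariance shrinks after every example, matching the strict online case the quote reports as most accurate but slowest; batch_size=256 defers that work, which is consistent with the reported 3-4x speedup at a small accuracy cost.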