Hypergraph Label Propagation Network

Authors: Yubo Zhang, Nan Wang, Yufeng Chen, Changqing Zou, Hai Wan, Xinbin Zhao, Yue Gao

AAAI 2020

Reproducibility assessment. Each entry below gives a variable, its assessed result, and the supporting LLM response.
Research Type: Experimental
We verify the effectiveness of our proposed HLPN method on a real-world microblog dataset gathered from Sina Weibo. Experiments demonstrate that the proposed method can significantly outperform the state-of-the-art methods and alternative approaches.
Researcher Affiliation: Collaboration
1) BNRist, KLISS, School of Software, Tsinghua University, China; 2) Huawei Noah's Ark Lab
Pseudocode: Yes
Algorithm 1: Hypergraph Label Propagation Network
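Algorithm 1 itself is not reproduced in this report. As a point of reference, the sketch below implements the classical hypergraph label propagation update of Zhou et al. (2007) that hypergraph methods such as HLPN build on; the function name, the damping factor alpha, and the iteration count are illustrative assumptions, and the paper's learned feature and scale embeddings are omitted.

```python
import numpy as np

def hypergraph_label_propagation(H, Y, edge_weights=None, alpha=0.9, n_iters=50):
    """H: (n_vertices, n_edges) incidence matrix.
    Y: (n_vertices, n_classes) one-hot labels, zero rows for unlabeled vertices.
    Returns a predicted class index for every vertex."""
    n_v, n_e = H.shape
    w = np.ones(n_e) if edge_weights is None else edge_weights
    d_v = np.maximum(H @ w, 1e-12)           # weighted vertex degrees
    d_e = np.maximum(H.sum(axis=0), 1e-12)   # hyperedge degrees
    # Normalized propagation operator: Dv^-1/2 H W De^-1 H^T Dv^-1/2
    Theta = (H * w / d_e) @ H.T / np.sqrt(np.outer(d_v, d_v))
    F = Y.astype(float).copy()
    for _ in range(n_iters):                 # F <- alpha*Theta*F + (1-alpha)*Y
        F = alpha * (Theta @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)
```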
Open Source Code: No
The paper does not provide any explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets: Yes
To evaluate the performance of the proposed method, we have conducted experiments on a dataset from the Sina Weibo platform (www.weibo.com) (Ji et al. 2018).
Dataset Splits: Yes
For generalization, our experiments are conducted with 10-fold cross-validation, randomly selecting 4650 samples as a training set, 400 samples as a validation set, and 500 samples as a testing set; we compare the average performance over the 10 folds with the results of state-of-the-art methods for fair evaluation.
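The quoted text fixes the split sizes but not the sampling mechanics. A minimal sketch, assuming each of the 10 folds is an independent random partition and that the pool covers exactly the 5550 sampled items; make_fold and the seed scheme are hypothetical.

```python
import numpy as np

def make_fold(n_samples, seed):
    """One fold: a random 4650/400/500 train/val/test partition."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return idx[:4650], idx[4650:5050], idx[5050:5550]

# Ten independent random folds; reported scores are averaged over folds.
folds = [make_fold(n_samples=5550, seed=fold) for fold in range(10)]
```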
Hardware Specification: No
The paper discusses time-cost comparisons, implying computational resources were used, but does not specify any particular hardware components such as CPU or GPU models, or memory specifications.
Software Dependencies: No
The paper mentions using an 'ANP detector library named SentiBank' but does not provide any version numbers for this or any other software dependency, nor does it list programming languages or libraries with versions.
Experiment Setup: Yes
The proposed network architecture is shown in Fig. 2. We first employ a feature extraction module which contains bag-of-words dictionaries of different modalities... In terms of the embedding module, we adopt multiple three-layer perceptrons, abbreviated as fθm... For simplicity, the number of neurons in the three layers of all modalities is set as 64, 32 and 16, as shown in the right of Fig. 3... Hyperparameters include the depth and width of the multi-layer perceptrons in the feature embedding module and the scale embedding module, the batch size N, and the number of nearest neighbors k used in constructing the hypergraph in the hypergraph construction module. ... We set the size of the to-predict set Bu in one batch as 500, the same as that of the testing set. For the scale of the labeled set Bl in one batch, we set the size to 1500.
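The quoted setup fixes the per-modality layer widths (64, 32, 16), the batch composition (1500 labeled, 500 to-predict), and kNN hypergraph construction, but not the framework or activations. A minimal PyTorch sketch under those numbers; ModalityEmbedding, knn_hyperedges, the ReLU activations, and the value of k are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class ModalityEmbedding(nn.Module):
    """Three-layer perceptron fθm for one modality (widths 64/32/16 per paper)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16),
        )

    def forward(self, x):
        return self.net(x)

def knn_hyperedges(emb, k):
    """Incidence matrix H (n_vertices, n_edges): hyperedge j joins vertex j
    with its k nearest neighbors in the embedding space."""
    dist = torch.cdist(emb, emb)                     # pairwise distances
    nbrs = dist.topk(k + 1, largest=False).indices   # each vertex plus k neighbors
    H = torch.zeros(emb.size(0), emb.size(0))
    H.scatter_(1, nbrs, 1.0)                         # row j marks members of edge j
    return H.T                                       # columns index hyperedges

# One batch mixes 1500 labeled and 500 to-predict samples (sizes from the paper):
# x = torch.cat([x_labeled, x_unlabeled])            # shape (2000, in_dim)
# H = knn_hyperedges(ModalityEmbedding(x.size(1))(x), k=10)  # k=10 is a guess
```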