Graph Collaborative Expert Finding with Contrastive Learning

Authors: Qiyao Peng, Wenjun Wang, Hongtao Liu, Cuiying Huo, Minglai Shao

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on six CQA datasets demonstrate significant improvements compared with recent methods. Our experiments are conducted on real-world CQA datasets from Stack Exchange. Table 1 shows the performance w.r.t. ranking performance among the methods on six CQA datasets.
Researcher Affiliation | Collaboration | Qiyao Peng (1), Wenjun Wang* (2,3,5), Hongtao Liu (4), Cuiying Huo (2), and Minglai Shao* (1). 1: School of New Media and Communication, Tianjin University, Tianjin, China; 2: College of Intelligence and Computing, Tianjin University, Tianjin, China; 3: Georgia Tech Shenzhen Institute, Tianjin University, Guangdong, China; 4: Du Xiaoman Financial Technology, Beijing, China; 5: Yazhou Bay Innovation Institute, Hainan Tropical Ocean University, Sanya, Hainan, China.
Pseudocode | No | The paper includes a figure (Figure 2) illustrating the model's workflow but does not provide any pseudocode or algorithm blocks.
Open Source Code | No | The paper states, "For models that have been open sourced (e.g., NeRank, PMEF, etc.), we directly use the public code for evaluation." This refers to other models, not to the code for the authors' own method (CGEF). No specific link or statement about releasing their own source code is provided.
Open Datasets | Yes | Our experiments are conducted on real-world CQA datasets from Stack Exchange (https://archive.org/details/stackexchange). Table 2 summarizes comprehensive statistics for six datasets in detail.
Dataset Splits | Yes | Each dataset is partitioned into three distinct sets, namely a training set, a validation set, and a testing set. The allocation ratios for these sets are 80%, 10%, and 10% respectively, maintaining the chronological order. (A chronological split sketch is given after this table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU specifications, or memory used for the experiments.
Software Dependencies | No | The paper mentions using BERT, word2vec, and the Adam optimizer but does not specify version numbers for these software components or any other libraries used.
Experiment Setup | Yes | The dimensions of the question and expert embeddings (d) were set to 100. The embedding dimension in the high-order connectivity encoder is set to 384. The model consisted of 3 graph attention layers, and a batch size of 128 was used. To mitigate overfitting, a dropout technique [Srivastava et al., 2014] was employed with a dropout ratio of 0.3. We employ Adam [Kingma and Ba, 2015] to optimize our model, setting the learning rate to 0.001 and the weight decay to 0.0005. The interest-level replacing ratio λ is 0.3 and the behavior-level dropping ratio ρ is 0.25. The temperature τ is 0.1. (A hedged configuration sketch is given after this table.)
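
The chronological 80%/10%/10% split reported in the Dataset Splits row can be reproduced along the following lines. This is a minimal sketch, assuming the questions are held in a pandas DataFrame with a creation-timestamp column; the column name `creation_date`, the file name, and the DataFrame layout are our assumptions, not details from the paper.

```python
# Minimal sketch of a chronological 80/10/10 train/validation/test split.
# Assumes the questions live in a pandas DataFrame with a `creation_date`
# column; the column name and data layout are assumptions, not from the paper.
import pandas as pd


def chronological_split(df: pd.DataFrame,
                        time_col: str = "creation_date",
                        train_frac: float = 0.8,
                        val_frac: float = 0.1):
    """Sort by time, then cut into contiguous train/validation/test blocks."""
    df = df.sort_values(time_col).reset_index(drop=True)
    n = len(df)
    train_end = int(n * train_frac)
    val_end = int(n * (train_frac + val_frac))
    return df.iloc[:train_end], df.iloc[train_end:val_end], df.iloc[val_end:]


# Example usage (hypothetical file name):
# questions = pd.read_csv("stackexchange_questions.csv", parse_dates=["creation_date"])
# train_df, val_df, test_df = chronological_split(questions)
```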
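
The hyperparameters quoted in the Experiment Setup row can be gathered into a single configuration object for a re-implementation. The sketch below is not the authors' code (which is not public); only the numeric values come from the paper, while the dataclass, its field names, and the PyTorch optimizer shown in the usage comment are our own assumptions.

```python
# Hedged sketch: the hyperparameters reported for CGEF, collected into one
# config object. Field names and the dataclass are our own; only the values
# are taken from the paper's experiment-setup description.
from dataclasses import dataclass


@dataclass
class CGEFConfig:
    embed_dim: int = 100                 # question/expert embedding dimension d
    high_order_dim: int = 384            # high-order connectivity encoder dimension
    num_gat_layers: int = 3              # graph attention layers
    batch_size: int = 128
    dropout: float = 0.3
    learning_rate: float = 1e-3          # Adam learning rate
    weight_decay: float = 5e-4
    interest_replace_ratio: float = 0.3  # λ, interest-level replacing ratio
    behavior_drop_ratio: float = 0.25    # ρ, behavior-level dropping ratio
    temperature: float = 0.1             # τ, contrastive temperature


# Example: building the optimizer with the reported settings. PyTorch is only
# an assumed framework here; the paper does not name one.
# import torch
# cfg = CGEFConfig()
# optimizer = torch.optim.Adam(model.parameters(),
#                              lr=cfg.learning_rate,
#                              weight_decay=cfg.weight_decay)
```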