Knowledge-Aware Explainable Reciprocal Recommendation

Authors: Kai-Huang Lai, Zhe-Rui Yang, Pei-Yuan Lai, Chang-Dong Wang, Mohsen Guizani, Min Chen

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two real-world datasets from diverse scenarios demonstrate that the proposed model outperforms state-of-the-art baselines, while also delivering compelling reasons for recommendations to both parties.
Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; (2) South China Technology Commercialization Center, Guangzhou, China; (3) Machine Learning Department, Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE; (4) School of Computer Science and Engineering, South China University of Technology, Guangzhou, China; (5) Pazhou Lab, Guangzhou, China
Pseudocode | No | The paper describes its method verbally and mathematically but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at: https://github.com/AllminerLab/Codefor-KAERR-master.
Open Datasets | Yes | Online Recruitment. We use a dataset from the Aliyun Programming Competition on Person-Job Fitting (https://tianchi.aliyun.com/dataset/31623), provided by a large Chinese online recruitment platform, namely Zhaopin. For simplicity, the dataset is called Zhaopin.
Dataset Splits | No | The paper mentions that "early stopping with a patience of 10 epochs is adopted to prevent overfitting," which implies the use of a validation set, but it does not specify the train/validation/test split percentages or sample counts.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper states: "We implement the baseline models using RecBole (Zhao et al. 2022) library." However, it does not provide specific version numbers for any software or libraries. (A hedged RecBole usage sketch follows the table.)
Experiment Setup | Yes | Hyper-parameters for all methods are tuned through grid search. The Adam optimizer is utilized for model training. The learning rate is selected from {0.01, 0.001, 0.0001} via tuning. Early stopping with a patience of 10 epochs is adopted to prevent overfitting. The parameter tuning results are shown in Figure 2. We study the impacts of three key hyper-parameters: the maximum number of metapaths L_m, the knowledge graph embedding size h_e, and λ in the loss function. (A hedged sketch of this tuning protocol also follows the table.)
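Since the paper only names RecBole and gives no versions or configuration files, the following is a minimal sketch, assuming a recent RecBole 1.x release, of how a baseline run is typically launched. The model name, dataset, and config values are illustrative placeholders, not the authors' actual setup.

```python
# Minimal sketch, assuming RecBole ~1.x. The model, dataset, and config values below
# are illustrative placeholders, not the configuration used in the paper.
from recbole.quick_start import run_recbole

if __name__ == "__main__":
    run_recbole(
        model="BPR",            # hypothetical baseline choice; the paper does not list RecBole model names
        dataset="ml-100k",      # placeholder dataset bundled with RecBole, not the paper's Zhaopin data
        config_dict={
            "learning_rate": 0.001,  # one value from the paper's grid {0.01, 0.001, 0.0001}
            "stopping_step": 10,     # early-stopping patience of 10 epochs, as reported
        },
    )
```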
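The Experiment Setup row describes grid search over the learning rate with the Adam optimizer and early stopping at a patience of 10 epochs. Below is a minimal PyTorch sketch of that protocol; `build_model`, `train_one_epoch`, and `evaluate` are hypothetical placeholders for components the paper does not release, and the epoch cap is an assumption.

```python
# Hedged sketch of the reported tuning protocol: grid search over the learning rate
# {0.01, 0.001, 0.0001}, Adam optimizer, early stopping with a patience of 10 epochs.
# `build_model`, `train_one_epoch`, and `evaluate` are hypothetical placeholders,
# not functions from the authors' released code.
import torch

LEARNING_RATES = [0.01, 0.001, 0.0001]
PATIENCE = 10
MAX_EPOCHS = 200  # assumed cap; the paper does not state a maximum epoch count

def tune(build_model, train_one_epoch, evaluate):
    best = {"lr": None, "score": float("-inf")}
    for lr in LEARNING_RATES:
        model = build_model()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        best_score, bad_epochs = float("-inf"), 0
        for epoch in range(MAX_EPOCHS):
            train_one_epoch(model, optimizer)
            score = evaluate(model)          # validation metric (higher is better)
            if score > best_score:
                best_score, bad_epochs = score, 0
            else:
                bad_epochs += 1
                if bad_epochs >= PATIENCE:   # stop after 10 epochs without improvement
                    break
        if best_score > best["score"]:
            best = {"lr": lr, "score": best_score}
    return best
```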