Your Career Path Matters in Person-Job Fit

Authors: Zhuocheng Gong, Yang Song, Tao Zhang, Ji-Rong Wen, Dongyan Zhao, Rui Yan

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the practical value of our proposed model, we conduct extensive experiments on real-world data extracted from an online recruitment platform and then present detailed cases to show how the career path matters in person-job fit.
Researcher Affiliation | Collaboration | (1) Wangxuan Institute of Computer Technology, Peking University; (2) BOSS Zhipin; (3) Gaoling School of Artificial Intelligence, Renmin University of China
Pseudocode | No | The paper includes a 'Model overview' diagram (Figure 2) that illustrates the architecture, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper includes the URL https://github.com/huggingface/transformers, which points to a third-party library used in the work, not to the authors' own source code for the proposed WEPJM model. There is no explicit statement about releasing their implementation.
Open Datasets | No | We build a dataset by collecting data from a real-world online recruiting platform. We anonymize all identity information to protect the privacy of job-seekers and recruiters.
Dataset Splits | Yes | Both the validation set and the testing set contain 3840 samples, with 1920 positive samples and 1920 negative samples each — see the balanced-split sketch below the table.
Hardware Specification | No | The paper mentions training models but does not specify any hardware details such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions using 'BERT-base' and the 'Adam optimizer', and links to https://github.com/huggingface/transformers, but it does not provide version numbers for these or any other software dependencies required for reproducibility.
Experiment Setup | Yes | We use BERT-base to learn sentence-level representations. After encoding with BERT, the dimension of hidden states is set to 200. The batch size is set to 16. λCPI, λCPR, and λCL are searched in {0.1, 1}. The temperature hyperparameter is set to 1. The model is trained with the Adam optimizer (Kingma and Ba 2014) with the learning rate initialized as 5e-4. Training is stopped early if the evaluation results do not improve for 3 successive epochs — see the training-setup sketch below the table.
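
The balanced 3840-sample splits quoted in the Dataset Splits row could be constructed as in the following sketch. The function name, seed, and sampling scheme are assumptions for illustration, not the authors' code.

```python
import random

def balanced_split(positives, negatives, n_per_class=1920, seed=0):
    """Sample a balanced evaluation split of 2 * n_per_class labeled pairs."""
    rng = random.Random(seed)
    pos = [(x, 1) for x in rng.sample(positives, n_per_class)]
    neg = [(x, 0) for x in rng.sample(negatives, n_per_class)]
    split = pos + neg
    rng.shuffle(split)
    return split

# Usage (hypothetical pools; validation and test pools must be disjoint):
# val_set  = balanced_split(val_pos_pool,  val_neg_pool)   # 3840 samples
# test_set = balanced_split(test_pos_pool, test_neg_pool)  # 3840 samples
```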
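
For concreteness, here is a minimal sketch of the training configuration quoted in the Experiment Setup row, assuming PyTorch and the HuggingFace transformers library linked in the paper. Only the quoted hyperparameters come from the paper; the checkpoint name, projection head, training loop, and epoch budget are assumptions, not the authors' implementation.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Checkpoint name is an assumption; the paper only says "BERT-base".
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

# "After encoding with BERT, the dimension of hidden states is set to 200":
# modeled here as a linear projection of BERT's 768-dim output.
project = nn.Linear(encoder.config.hidden_size, 200)

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(project.parameters()),
    lr=5e-4,  # learning rate quoted in the paper
)

BATCH_SIZE = 16    # quoted batch size
TEMPERATURE = 1.0  # quoted temperature hyperparameter
# Loss weights λCPI, λCPR, and λCL are searched in {0.1, 1} in the paper;
# the losses themselves are not reproduced here.

def train_one_epoch():
    """Placeholder for one epoch of WEPJM training (not specified in the quote)."""

def evaluate():
    """Placeholder validation metric, e.g. AUC; returns a float."""
    return 0.0

# Early stopping: halt if the validation metric fails to improve for
# 3 successive epochs, as reported.
best, stale, PATIENCE = float("-inf"), 0, 3
for epoch in range(100):  # the maximum epoch count is an assumption
    train_one_epoch()
    metric = evaluate()
    if metric > best:
        best, stale = metric, 0
    else:
        stale += 1
        if stale >= PATIENCE:
            break
```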