A Joint Learning Approach to Intelligent Job Interview Assessment
Authors: Dazhong Shen, Hengshu Zhu, Chen Zhu, Tong Xu, Chao Ma, Hui Xiong
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on real-world data clearly validate the effectiveness of JLMIA, which can lead to substantially less bias in job interviews and provide a valuable understanding of job interview assessment. |
| Researcher Affiliation | Collaboration | Dazhong Shen¹², Hengshu Zhu², Chen Zhu², Tong Xu¹², Chao Ma², Hui Xiong¹²³⁴. ¹Anhui Province Key Lab of Big Data Analysis and Application, University of S&T of China; ²Baidu Talent Intelligence Center; ³Business Intelligence Lab, Baidu Research; ⁴National Engineering Laboratory of Deep Learning Technology and Application, China. sdz@mail.ustc.edu.cn, {zhuhengshu, zhuchen02, machao13}@baidu.com, tongxu@ustc.edu.cn, xionghui@gmail.com |
| Pseudocode | Yes | Algorithm 1: The Generative Process of JLMIA for Resume and Interview Assessment |
| Open Source Code | No | The paper does not provide any link or explicit statement about releasing the source code for the methodology described. |
| Open Datasets | No | The dataset used in the experiments is historical recruitment data provided by a high-tech company in China, which contains a total of 14,702 candidate interview records. ... The paper describes using a proprietary dataset and does not provide any access information (link, DOI, or citation to a public source). |
| Dataset Splits | No | After that, we randomly selected 80% of the data for model training and the other 20% for testing. The paper specifies training and test splits, but no explicit validation split percentage or count is provided. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper discusses models and algorithms used (e.g., JLMIA, LDA, BM25) but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | In JLMIA, we empirically set fixed parameters {δ², βJ, βR, βE} = {0.01, 0.1, 0.1, 0.1}. ... In particular, we set the parameters K = 10 and C = 2. ... In our algorithm, the parameters are empirically set as Rel = 5, Div = 20 and µ = 0.9. |
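
The random 80%/20% train/test split reported in the Dataset Splits row can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the record representation, function name, and fixed seed are assumptions, and the paper reports no validation split.

```python
import random

def split_records(records, train_frac=0.8, seed=42):
    """Randomly split records into train/test sets, mirroring the
    paper's 80%/20% protocol. The seed is assumed for repeatability."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Illustrative: the 14,702 candidate interview records as opaque IDs.
records = list(range(14702))
train, test = split_records(records)
print(len(train), len(test))  # 11761 2941
```

With 14,702 records this yields 11,761 training and 2,941 test examples; since the paper gives no split counts or seed, exact membership of each split cannot be reproduced.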