Enhancing Job Recommendation through LLM-Based Generative Adversarial Networks
Authors: Yingpeng Du, Di Luo, Rui Yan, Xiaopei Wang, Hongzhi Liu, Hengshu Zhu, Yang Song, Jie Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three large real-world recruitment datasets demonstrate the effectiveness of our proposed method. |
| Researcher Affiliation | Collaboration | 1School of Computer Science and Engineering, Nanyang Technological University, Singapore 2Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 3School of Languages and Communication Studies, Beijing Jiaotong University, Beijing, China 4School of Software and Microelectronics, Peking University, Beijing, China 5Career Science Lab, BOSS Zhipin, Beijing, China 6NLP Center, BOSS Zhipin, Beijing, China |
| Pseudocode | No | The paper includes mathematical formulas and an architecture diagram (Figure 2) but does not present any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a specific repository link or an explicit statement about releasing the source code for the methodology. |
| Open Datasets | No | We evaluated the proposed method on three real-world data sets, which were provided by a popular online recruiting platform. |
| Dataset Splits | Yes | We split the interaction records into training, validation, and test sets equally. |
| Hardware Specification | No | The paper does not mention any specific GPU models, CPU models, or other hardware specifications used for running the experiments. |
| Software Dependencies | No | We adopted ChatGLM-6B (Du et al. 2022) as the LLM model in this paper. For a fair comparison, all methods were optimized by the AdamW optimizer... |
| Experiment Setup | Yes | all methods were optimized by the AdamW optimizer with the same latent space dimension (i.e., 64), batch size (i.e., 1024), learning rate (i.e., 5 × 10^-5), and regularization coefficient (i.e., 1 × 10^-4). We set d = 768, de = 128, de = 64, and dc = ds = dg = 256 for the proposed method. |
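
The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration, which is useful when attempting a reproduction. This is a minimal sketch, assuming the values as reported; the key names (e.g., `latent_dim`, `weight_decay`) are illustrative labels of our own, since the paper gives only the values, and the garbled exponents are read as 5×10⁻⁵ and 1×10⁻⁴:

```python
# Hypothetical reproduction config assembled from the paper's reported
# experiment setup. Key names are our own; values are as quoted.
CONFIG = {
    "optimizer": "AdamW",      # shared by all compared methods
    "latent_dim": 64,          # latent space dimension
    "batch_size": 1024,
    "learning_rate": 5e-5,     # reported as "5 10 5", read as 5 x 10^-5
    "weight_decay": 1e-4,      # regularization coefficient, 1 x 10^-4
    "llm": "ChatGLM-6B",       # LLM backbone used by the method
    "d": 768,                  # matches ChatGLM-6B token embedding width
    "d_c": 256,
    "d_s": 256,
    "d_g": 256,
}

def summarize(cfg: dict) -> str:
    """Render the config as a one-line summary for experiment logs."""
    return ", ".join(f"{k}={v}" for k, v in cfg.items())
```

Note that the paper's sentence "de = 128, de = 64" assigns two values to the same symbol, likely a typesetting or extraction artifact, so those two dimensions are omitted above rather than guessed.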