Contrastive Graph Transformer Network for Personality Detection
Authors: Yangfu Zhu, Linmei Hu, Xinkai Ge, Wanrong Peng, Bin Wu
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two standard datasets demonstrate that our CGTN outperforms the state-of-the-art methods for personality detection. |
| Researcher Affiliation | Academia | 1Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia, Beijing University of Posts and Telecommunications, Beijing, China 2Medical Psychological Center, the Second Xiangya Hospital, Central South University, Changsha, China |
| Pseudocode | No | No pseudocode or algorithm blocks found. |
| Open Source Code | Yes | The code is available at https://github.com/yangpu06/CGTN |
| Open Datasets | Yes | Following previous studies, we conduct experiments on the Kaggle dataset with the MBTI taxonomy and the Essays dataset with the Big Five taxonomy. The Kaggle dataset is collected from Personality Cafe, where people share their personality types and daily communications, with a total of 8675 users and 45-50 posts for each user. The traits for the Kaggle dataset, namely, the MBTI taxonomy, include Introversion / Extroversion, Sensing / iNtuition, Thinking / Feeling, and Perception / Judging. The Essays [Pennebaker and King, 1999] is a well-known dataset of stream-of-consciousness texts which contains 2468 anonymous users with approximately 50 sentences recorded for each user. Each user is tagged with a binary label of the Big Five taxonomy, including Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism. |
| Dataset Splits | Yes | The two datasets are randomly divided 6:2:2 into training, validation, and test sets, respectively. (A minimal split sketch follows this table.) |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory amounts, or detailed computer specifications) are mentioned for running experiments. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | For pretraining, the initial learning rate is searched in {1e-2, 1e-3, 1e-4} to optimize the contrastive loss on different datasets. The mini-batch size is set to 64. The temperature τ is set to 0.15. We adopt early stopping when the validation loss stops decreasing for 10 epochs. For joint learning, we search the trade-off parameter λ in {1, 0.1, 0.01, 0.001, 0.0001} for different datasets. The initial learning rate is also searched in {1e-2, 1e-3, 1e-4}. The settings of batch size, early-stopping patience, and temperature are the same as in the pretraining stage. (These values are collected into a configuration sketch after this table.) |
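
The 6:2:2 split reported above is the only detail the paper gives about data partitioning. Below is a minimal sketch of one way to reproduce such a split; the use of scikit-learn's `train_test_split`, the per-user granularity, and the seed value are assumptions, not details from the paper.

```python
from sklearn.model_selection import train_test_split

def split_users(user_ids, seed=42):
    """Randomly split users 6:2:2 into train/val/test sets.

    Assumption: the split is done per user with a fixed seed; the paper
    only states that both datasets are divided 6:2:2.
    """
    train, rest = train_test_split(user_ids, test_size=0.4, random_state=seed)
    val, test = train_test_split(rest, test_size=0.5, random_state=seed)
    return train, val, test
```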
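
The experiment-setup values quoted above can be gathered into a small configuration sketch. Only the numeric values (learning-rate grid, batch size 64, τ = 0.15, patience of 10 epochs, and the λ grid) come from the paper; the variable names, the grid-enumeration helper, and the overall structure are illustrative assumptions.

```python
from itertools import product

# Values reported in the paper; names and structure are assumed.
PRETRAIN_LEARNING_RATES = [1e-2, 1e-3, 1e-4]   # searched per dataset (contrastive pretraining)
JOINT_LEARNING_RATES = [1e-2, 1e-3, 1e-4]      # searched per dataset (joint learning)
LAMBDA_GRID = [1, 0.1, 0.01, 0.001, 0.0001]    # trade-off parameter for the contrastive term
BATCH_SIZE = 64
TEMPERATURE = 0.15                             # temperature tau in the contrastive loss
EARLY_STOPPING_PATIENCE = 10                   # epochs without validation-loss improvement

def joint_search_space():
    """Enumerate the (learning rate, lambda) combinations searched for joint learning."""
    return list(product(JOINT_LEARNING_RATES, LAMBDA_GRID))
```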