Multi-Document Transformer for Personality Detection

Authors: Feifan Yang, Xiaojun Quan, Yunyi Yang, Jianxing Yu

AAAI 2021, pp. 14221-14229

Reproducibility assessment (variable, result, and supporting LLM response):

Research Type: Experimental. "We evaluate the proposed model on the Kaggle and Pandora MBTI datasets and the experimental results show that it compares favorably with baseline methods."

Researcher Affiliation: Academia. "Feifan Yang, Xiaojun Quan*, Yunyi Yang, Jianxing Yu. School of Data and Computer Science, Sun Yat-sen University, China. {yangff6, yangyy37}@mail2.sysu.edu.cn, {quanxj3, yujx26}@mail.sysu.edu.cn"

Pseudocode: No. The paper includes architectural diagrams (Figure 2 and Figure 3) and mathematical equations, but no structured pseudocode or algorithm blocks.

Open Source Code: No. There is no statement or link indicating that the source code for the methodology is openly available.
Open Datasets: Yes. "Following previous studies (Hernandez and Knight 2017; Keh and Cheng 2019; Gjurković et al. 2020), we conduct experiments on the Kaggle and Pandora MBTI personality datasets." The footnoted dataset links are https://www.kaggle.com/datasnaek/mbti-type (Kaggle) and https://psy.takelab.fer.hr/datasets/all/ (Pandora).
Dataset Splits: Yes. "Then, we randomly split them into a 60-20-20 proportion for training, validation, and testing, respectively."
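The paper gives only the 60-20-20 proportion, not the splitting code. A minimal sketch of such a user-level split follows; the file name and column layout ("mbti_1.csv" with "type" and "posts" columns) come from the public Kaggle MBTI dataset, and the fixed seed is an assumption for illustration, not a detail from the paper.

```python
# Hypothetical 60-20-20 random split; file/column names assume the
# public Kaggle MBTI dataset, not details stated in the paper.
import pandas as pd

df = pd.read_csv("mbti_1.csv")  # columns: "type" (MBTI label), "posts"

# Shuffle users with a fixed seed, then slice 60/20/20.
df = df.sample(frac=1.0, random_state=42).reset_index(drop=True)
n = len(df)
train = df.iloc[: int(0.6 * n)]
valid = df.iloc[int(0.6 * n) : int(0.8 * n)]
test = df.iloc[int(0.8 * n) :]
print(len(train), len(valid), len(test))
```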
Hardware Specification: Yes. "We use Pytorch (Paszke et al. 2019) to implement all the deep learning models on four 2080Ti GPU cards."
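The paper states the hardware but not how the four cards were used. A minimal sketch, assuming plain PyTorch data parallelism; the placeholder module stands in for the paper's actual model.

```python
# Hypothetical multi-GPU setup; the paper only says four 2080Ti cards
# were used, not how the model was distributed across them.
import torch
from torch import nn

model = nn.Linear(768, 4)  # placeholder for the paper's actual model
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1, 2, 3])
model = model.to("cuda")
```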
Software Dependencies: No. The paper mentions PyTorch (Paszke et al. 2019), the Adam optimizer (Kingma and Ba 2014), bert-base-cased (Devlin et al. 2018), and xlnet-base-cased (Yang et al. 2019b), but these are citations rather than explicit version numbers; no software versions are provided for reproducibility.
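Because no versions are pinned, a re-implementation should record its own environment. A minimal sketch; the transformers package is an assumption, as the likely source of the bert-base-cased and xlnet-base-cased checkpoints the paper names.

```python
# Record the library versions actually used, since the paper pins none.
import torch
import transformers  # assumed source of bert-base-cased / xlnet-base-cased

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
```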
Experiment Setup: Yes. "For training, we use the Adam (Kingma and Ba 2014) optimizer with an initial learning rate α = 2e-5 and a mini-batch size of 24. Following previous work, we set the max number of posts to 50 for each user and the max length to 70 for each post. ... The hidden size d_t of our dimension attention module is set to 768."
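The reported hyperparameters translate directly into a training configuration. A minimal sketch, assuming a Hugging Face tokenizer for the bert-base-cased checkpoint and a placeholder module in place of the paper's multi-document transformer:

```python
# Reported hyperparameters: Adam, lr = 2e-5, batch size 24, at most
# 50 posts per user, 70 tokens per post, attention hidden size 768.
# Tokenizer checkpoint and placeholder model are illustrative assumptions.
import torch
from torch import nn
from transformers import BertTokenizer

MAX_POSTS = 50   # max posts per user (from the paper)
MAX_LEN = 70     # max tokens per post (from the paper)
D_T = 768        # dimension-attention hidden size d_t (from the paper)

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = nn.Linear(D_T, 4)  # placeholder; one output per MBTI dimension
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
batch_size = 24
```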