Idiographic Personality Gaussian Process for Psychological Assessment

Authors: Yehu Chen, Muchen Xi, Joshua Jackson, Jacob Montgomery, Roman Garnett

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Using synthetic and real data, we show that IPGP improves both prediction of actual responses and estimation of individualized factor structures relative to existing benchmarks. In a third study, we show that IPGP also identifies unique clusters of personality taxonomies in real-world data, displaying great potential to advance individualized approaches to psychological diagnosis and treatment." "We conducted an extensive simulation study comparing IPGP against benchmark methods and analyzed an existing cross-sectional personality dataset. Our results demonstrate that IPGP simultaneously enhances the estimation of idiographic taxonomies and improves the prediction of responses. Additionally, we collected a novel IRB-approved longitudinal dataset. When applied to this data, IPGP not only shows superior performance in response prediction but also suggests unique personality taxonomies."
Researcher Affiliation | Academia | Yehu Chen, Muchen Xi, Joshua Jackson, Jacob Montgomery, Roman Garnett; Washington University in St. Louis; {chenyehu, m.xi, j.jackson, jacob.montgomery, garnett}@wustl.edu
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | "The data and code to reproduce the main experimental results of this paper are uploaded as supplementary materials."
Open Datasets | Yes | "We utilize an existing dataset called the Life Outcomes of Personality Replication (LOOPR) [50], which is collected from 5,347 unique participants on the Big Five Inventory [51] consisting of 60 questions."
Dataset Splits | Yes | "Finally, we generate the y_ijt according to the ordered probit model with C = 5 levels, and apply 80%/20% splitting for training and testing. For the forecasting task, we train both models with data from the first 40 days and predict future responses for the last 5 days. For the cross-validation task, we predict responses of each trait by training on data belonging to the other four traits, where 20% of responses for one trait were held out (randomly choosing the trait and items to remove)."
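The ordered probit generation with C = 5 levels and the 80%/20% train/test split described above can be sketched as follows. This is a minimal NumPy illustration: the cutpoints and latent utilities are hypothetical stand-ins for quantities the IPGP model would supply, not the paper's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent utilities; in the paper these come from the GP model.
latent = rng.normal(size=1000)

# Ordered probit with C = 5 levels: fixed (illustrative) cutpoints partition
# the real line, and each response is the bin its noisy utility falls into.
cutpoints = np.array([-1.5, -0.5, 0.5, 1.5])
noise = rng.normal(size=latent.shape)
y = np.searchsorted(cutpoints, latent + noise) + 1  # responses in {1, ..., 5}

# 80%/20% train/test split by shuffling indices.
idx = rng.permutation(len(y))
n_train = int(0.8 * len(y))
train_idx, test_idx = idx[:n_train], idx[n_train:]
```

With 1,000 simulated responses this yields 800 training and 200 test observations; any categorical likelihood (here, ordered probit) could be dropped in at the `searchsorted` step.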
Hardware Specification | Yes | "We repeat our simulation with 25 different random seeds using 300 cores on Intel Xeon 2680 CPUs."
Software Dependencies | No | "We use 100 inducing points and the Adam optimizer with learning rate 0.05 to optimize the ELBO for 10 epochs with a batch size of 256." While an optimizer is named, no version numbers for it or for any other software dependency (e.g., programming languages, libraries, frameworks) are provided.
Experiment Setup | Yes | "We use 100 inducing points and the Adam optimizer with learning rate 0.05 to optimize the ELBO for 10 epochs with a batch size of 256."
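As a rough illustration of the reported training configuration (Adam, learning rate 0.05, 10 epochs, batch size 256), the sketch below runs minibatch Adam on a toy least-squares objective. The quadratic objective and synthetic data are stand-ins for the negative ELBO of the full variational GP model, which is beyond this sketch; only the hyperparameter values come from the paper.

```python
import numpy as np

# Hyperparameters reported in the paper.
LR, EPOCHS, BATCH = 0.05, 10, 256

rng = np.random.default_rng(1)
X = rng.normal(size=(2048, 4))          # synthetic features
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true                          # synthetic targets

def loss(w):
    """Mean squared error: a toy stand-in for the negative ELBO."""
    return float(np.mean((X @ w - y) ** 2))

# Adam optimizer state.
w = np.zeros(4)
m, v, t = np.zeros(4), np.zeros(4), 0
beta1, beta2, eps = 0.9, 0.999, 1e-8

initial_loss = loss(w)
for epoch in range(EPOCHS):
    for start in range(0, len(X), BATCH):
        xb, yb = X[start:start + BATCH], y[start:start + BATCH]
        grad = 2.0 * xb.T @ (xb @ w - yb) / len(xb)  # minibatch MSE gradient
        t += 1
        m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
        v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
        m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        w -= LR * m_hat / (np.sqrt(v_hat) + eps)     # bias-corrected update
final_loss = loss(w)
```

With 2,048 observations and a batch size of 256, each of the 10 epochs performs 8 Adam steps, and the objective decreases over training; in the paper the same schedule would instead be applied to the ELBO with 100 inducing points.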