LMLFM: Longitudinal Multi-Level Factorization Machine

Authors: Junjie Liang, Dongkuan Xu, Yiwei Sun, Vasant Honavar

AAAI 2020, pp. 4811-4818

Reproducibility assessment. Each entry below gives the variable assessed, the result, and the supporting evidence from the LLM's response.
Research Type: Experimental
Evidence: "We present results of experiments with both simulated and real-world longitudinal data which show that LMLFM outperforms the state-of-the-art methods in terms of predictive accuracy, variable selection ability, and scalability to data with large number of variables."
Researcher Affiliation: Academia
Evidence: "Junjie Liang (1,2), Dongkuan Xu (2), Yiwei Sun (1,3), Vasant Honavar (1,2,3,4); (1) Artificial Intelligence Research Laboratory, (2) College of Information Sciences and Technology, (3) Department of Computer Science and Engineering, (4) Institute of Computational and Data Sciences, all at Pennsylvania State University; {jul672, dux19, vhonavar}@ist.psu.edu, yus162@psu.edu"
Pseudocode: No
Evidence: The paper describes its algorithm in prose (e.g., the ICM procedure and the update equations for the parameters) but does not provide a formal pseudocode block or algorithm listing.
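
For readers unfamiliar with ICM (Iterated Conditional Modes), a minimal sketch of such a coordinate-wise update loop is shown below. The update callables here are hypothetical placeholders standing in for LMLFM's actual conditional-mode update equations, which appear only in the paper itself:

    import numpy as np

    def icm_sketch(update_individual, update_time, theta_i, theta_t,
                   max_iter=100, tol=1e-6):
        # Generic ICM loop: alternately set each factor block to the mode
        # of its conditional distribution given the other block, until the
        # parameters stop changing. The two update functions are placeholders
        # for the paper's actual update equations.
        for _ in range(max_iter):
            theta_i_new = update_individual(theta_t)   # individual-level factors
            theta_t_new = update_time(theta_i_new)     # time-level factors
            delta = max(np.max(np.abs(theta_i_new - theta_i)),
                        np.max(np.abs(theta_t_new - theta_t)))
            theta_i, theta_t = theta_i_new, theta_t_new
            if delta < tol:                            # joint fixed point reached
                break
        return theta_i, theta_t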
Open Source Code: Yes
Evidence: "The code and supplemental material is available at https://github.com/junjieliang672/LMLFM."
Open Datasets: Yes
Evidence: "We construct simulated longitudinal data sets with 40 individuals and 40 observations per individual. We consider several choices of p from {50, 100, 500, 1000, 5000}. We consider three types of correlation, i.e., pure LC, pure CC and both (See supplemental material for details.)."
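
A minimal sketch of how a data set with those dimensions might be generated, assuming a simple random-intercept form of longitudinal correlation (LC); the authors' actual generator, including the CC and combined settings, is described in their supplemental material:

    import numpy as np

    rng = np.random.default_rng(0)
    n_individuals, n_obs, p = 40, 40, 50   # p ranges over {50, 100, 500, 1000, 5000}

    # LC-style correlation: each individual gets a random effect shared across
    # its observations. A CC setting would instead share effects across
    # individuals observed at the same time point.
    X = rng.normal(size=(n_individuals * n_obs, p))
    beta = rng.normal(size=p)
    individual_effect = np.repeat(rng.normal(size=n_individuals), n_obs)
    y = X @ beta + individual_effect + rng.normal(scale=0.5, size=n_individuals * n_obs)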
Dataset Splits: Yes
Evidence: "Hyper-parameters are selected using cross validation on the training data. Evaluation scores are computed on the held-out data set."
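
A minimal sketch of that protocol, assuming a scikit-learn-style workflow and reusing X and y from the sketch above; the model, grid, and split ratio are illustrative stand-ins, not the paper's:

    from sklearn.model_selection import train_test_split, GridSearchCV
    from sklearn.linear_model import Ridge  # stand-in for LMLFM

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Hyper-parameters chosen by cross-validation on the training data only...
    search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=5)
    search.fit(X_train, y_train)

    # ...and the evaluation score computed on the held-out set.
    print(search.score(X_test, y_test))

Note that with longitudinal data, splits are typically grouped by individual (e.g., GroupKFold) to avoid leakage between observations of the same individual; whether the paper does so is not stated in the quoted text.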
Hardware Specification: No
Evidence: The paper does not provide specific details about the hardware used (e.g., CPU or GPU models, memory).
Software Dependencies: No
Evidence: The paper names software used for the baselines, such as the R packages 'lme4' and 'rpql', but does not specify version numbers for any software, including the authors' own implementation.
Experiment Setup: Yes
Evidence: "We use the same settings of hyper-parameters for LMLFM as in our experiments with simulated data. We exclude rpql because it fails on all of the experiments due to memory issue."