Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
LMLFM: Longitudinal Multi-Level Factorization Machine
Authors: Junjie Liang, Dongkuan Xu, Yiwei Sun, Vasant Honavar
AAAI 2020, pp. 4811–4818 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present results of experiments with both simulated and real-world longitudinal data which show that LMLFM outperforms the state-of-the-art methods in terms of predictive accuracy, variable selection ability, and scalability to data with large number of variables. |
| Researcher Affiliation | Academia | Junjie Liang,1,2 Dongkuan Xu,2 Yiwei Sun,1,3 Vasant Honavar1,2,3,4 1Artificial Intelligence Research Laboratory, Pennsylvania State University 2College of Information Sciences and Technology, Pennsylvania State University 3Department of Computer Science and Engineering, Pennsylvania State University 4Institute of Computational and Data Sciences, Pennsylvania State University EMAIL, EMAIL |
| Pseudocode | No | The paper describes algorithms and updates (e.g., ICM algorithm, update equations for parameters) but does not provide a formal pseudocode block or algorithm listing. |
| Open Source Code | Yes | The code and supplemental material is available at https://github.com/junjieliang672/LMLFM. |
| Open Datasets | Yes | We construct simulated longitudinal data sets with 40 individuals and 40 observations per individual. We consider several choices of p from {50, 100, 500, 1000, 5000}. We consider three types of correlation, i.e., pure LC, pure CC and both (see supplemental material for details). |
| Dataset Splits | Yes | Hyper-parameters are selected using cross validation on the training data. Evaluation scores are computed on the held-out data set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | The paper mentions software for baselines like 'lme4' and 'rPQL' (an R package) but does not specify version numbers for any software, including for their own implementation. |
| Experiment Setup | Yes | We use the same settings of hyper-parameters for LMLFM as in our experiments with simulated data. We exclude rPQL because it fails on all of the experiments due to memory issue. |
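The simulated-data design quoted above (40 individuals, 40 observations each, p covariates) can be sketched roughly as follows. This is a minimal illustration only: the random-intercept correlation used here is a stand-in assumption, not the paper's actual LC/CC correlation constructions, and the sparse-effect and noise parameters are invented for the example.

```python
import numpy as np

def simulate_longitudinal(n_individuals=40, n_obs=40, p=50, seed=0):
    """Toy longitudinal data generator matching the reported design sizes.

    Longitudinal correlation is approximated with a per-individual random
    intercept; the paper's pure-LC / pure-CC structures are not reproduced.
    """
    rng = np.random.default_rng(seed)
    n = n_individuals * n_obs
    X = rng.normal(size=(n, p))                      # covariates
    beta = np.zeros(p)
    beta[:5] = rng.normal(size=5)                    # a few nonzero effects (assumed sparsity)
    intercepts = rng.normal(size=n_individuals)      # individual-level random effects
    individual = np.repeat(np.arange(n_individuals), n_obs)
    y = X @ beta + intercepts[individual] + rng.normal(scale=0.1, size=n)
    return X, y, individual

X, y, ind = simulate_longitudinal(p=50)
print(X.shape, y.shape)  # (1600, 50) (1600,)
```

Sweeping `p` over {50, 100, 500, 1000, 5000} as in the table only requires changing the `p` argument.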