Latent Sparse Modeling of Longitudinal Multi-Dimensional Data
Authors: Ko-Shin Chen, Tingyang Xu, Jinbo Bi
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Computational results on synthetic datasets and real fMRI and EEG problems demonstrate the superior performance of the proposed approach over existing techniques. |
| Researcher Affiliation | Collaboration | 1 Department of Computer Science and Engineering, University of Connecticut, Storrs, CT, USA (ko-shin.chen@uconn.edu, jinbo.bi@uconn.edu); 2 Tencent AI Lab, Shenzhen, China (tingyangxu@tencent.com) |
| Pseudocode | Yes | Algorithm 1: Search for optimal Φ̂ |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that the code for their methodology is released. |
| Open Datasets | Yes | The fMRI data used in the experiment were collected by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (http://adni.loni.usc.edu/). |
| Dataset Splits | Yes | We randomly select 80% of the subjects for training and the rest for testing...The λ1, λ2, and λ3 were tuned in a two-fold cross validation. In other words, the training records were further split into half: one used to build a model with a chosen parameter value from a range of 1 to 20 with a step size of 0.1; and the other used to test the resultant model. |
| Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., specific CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | In our experiments, λ's are tuned as λ1 = λ2 = λ3 = 0.3 based on cross validation within training...The hyperparameters λ1, λ2, and λ3 in our approach and GEE/PGEE (one parameter) were tuned in a two-fold cross validation within the training data: the training records were split into half, one used to build a model with a chosen parameter value from a range of 1 to 20 with a step size of 0.1, and the other used to test the resultant model. |
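
For concreteness, the split-and-tune protocol quoted above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's method: the data are synthetic, the Ridge estimator is only a stand-in for the proposed latent sparse longitudinal model, and a single regularization parameter `lam` stands in for λ1, λ2, and λ3.

```python
# Hypothetical sketch of the quoted evaluation protocol: an 80/20
# subject-level split, then a two-fold cross validation on the training
# half to tune a regularization parameter over a 1-to-20 grid (step 0.1).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy data: one row per subject (assumption; the real data are longitudinal).
n_subjects, n_features = 200, 50
X = rng.standard_normal((n_subjects, n_features))
y = X @ rng.standard_normal(n_features) + 0.1 * rng.standard_normal(n_subjects)

# Randomly select 80% of the subjects for training and the rest for testing.
perm = rng.permutation(n_subjects)
n_train = int(0.8 * n_subjects)
train_idx, test_idx = perm[:n_train], perm[n_train:]

# Two-fold CV inside the training set: one half builds the model,
# the other half evaluates it, then the roles are swapped.
half = n_train // 2
fold_a, fold_b = train_idx[:half], train_idx[half:]

# Candidate values from 1 to 20 with a step size of 0.1, as in the quoted setup.
grid = np.arange(1.0, 20.0 + 1e-9, 0.1)

def cv_error(lam):
    err = 0.0
    for fit_idx, val_idx in [(fold_a, fold_b), (fold_b, fold_a)]:
        model = Ridge(alpha=lam).fit(X[fit_idx], y[fit_idx])
        err += mean_squared_error(y[val_idx], model.predict(X[val_idx]))
    return err / 2

best_lam = min(grid, key=cv_error)

# Refit on the full training set with the selected value and report test error.
final = Ridge(alpha=best_lam).fit(X[train_idx], y[train_idx])
print("selected lambda:", round(best_lam, 1))
print("test MSE:", mean_squared_error(y[test_idx], final.predict(X[test_idx])))
```

The grid here simply mirrors the 1-to-20 range with step 0.1 quoted from the paper; after the two-fold selection, the model is refit on the full training set with the chosen value before the final evaluation on the held-out 20% of subjects.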