Regularized Modal Regression with Applications in Cognitive Impairment Prediction
Authors: Xiaoqian Wang, Hong Chen, Weidong Cai, Dinggang Shen, Heng Huang
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On the application side, we applied our model to successfully improve the cognitive impairment prediction using the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort data. |
| Researcher Affiliation | Academia | 1) Department of Electrical and Computer Engineering, University of Pittsburgh, USA; 2) School of Information Technologies, University of Sydney, Australia; 3) Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA. Emails: xqwang1991@gmail.com, chenh@mail.hzau.edu.cn, tom.cai@sydney.edu.au, dinggang_shen@med.unc.edu, heng.huang@pitt.edu |
| Pseudocode | No | The paper describes the optimization algorithm in text and equations, but does not provide a formally labeled pseudocode block or structured algorithm steps. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | Here we present the comparison results on six benchmark datasets from the UCI repository [15] and StatLib (http://lib.stat.cmu.edu/datasets/), which include: slumptest, forestfire, bolts, cloud, kidney, and lupus. ... [15] M. Lichman. UCI Machine Learning Repository, 2013. Data used in this article were obtained from the ADNI database (adni.loni.usc.edu). |
| Dataset Splits | Yes | For evaluation, we calculate root mean square error (RMSE) between the predicted value and ground truth in out-of-sample prediction. We employ 2-fold cross validation and report the average performance for each method. For each method, we set the hyper-parameter of the regularization term in the range of {10^-4, 10^-3.5, ..., 10^4}. We tune the hyper-parameters via 2-fold cross validation on the training data and report the best parameter w.r.t. RMSE of each method. *(See the cross-validation sketch below the table.)* |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper does not specify versions for any software dependencies, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | For each method, we set the hyper-parameter of the regularization term in the range of {10^-4, 10^-3.5, ..., 10^4}. We tune the hyper-parameters via 2-fold cross validation on the training data and report the best parameter w.r.t. RMSE of each method. For RMR methods, we adopt the Epanechnikov kernel and set the bandwidth as σ = max(\|y − wᵀx\|). *(See the kernel sketch below the table.)* |
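The "Dataset Splits" row describes the paper's evaluation protocol: RMSE on out-of-sample predictions under 2-fold cross validation, with the regularization strength chosen from {10^-4, 10^-3.5, ..., 10^4} via a further 2-fold cross validation on the training data. Since no source code is released (see the "Open Source Code" row), the sketch below reconstructs that protocol with scikit-learn; `Ridge` is only a placeholder estimator and `tune_and_evaluate` a hypothetical helper, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

def tune_and_evaluate(X, y, seed=0):
    """2-fold CV evaluation with inner 2-fold CV for hyper-parameter tuning."""
    grid = 10.0 ** np.arange(-4.0, 4.5, 0.5)  # {10^-4, 10^-3.5, ..., 10^4}
    outer = KFold(n_splits=2, shuffle=True, random_state=seed)
    rmses = []
    for train_idx, test_idx in outer.split(X):
        X_tr, y_tr = X[train_idx], y[train_idx]
        X_te, y_te = X[test_idx], y[test_idx]
        inner = KFold(n_splits=2, shuffle=True, random_state=seed + 1)
        best_lam, best_rmse = grid[0], np.inf
        for lam in grid:  # pick the regularizer that minimizes inner-CV RMSE
            fold_rmse = []
            for tr, va in inner.split(X_tr):
                pred = Ridge(alpha=lam).fit(X_tr[tr], y_tr[tr]).predict(X_tr[va])
                fold_rmse.append(np.sqrt(mean_squared_error(y_tr[va], pred)))
            if np.mean(fold_rmse) < best_rmse:
                best_lam, best_rmse = lam, np.mean(fold_rmse)
        # Refit on the full training fold, then score on the held-out fold
        pred = Ridge(alpha=best_lam).fit(X_tr, y_tr).predict(X_te)
        rmses.append(np.sqrt(mean_squared_error(y_te, pred)))
    return np.mean(rmses)
```

Swapping in the actual RMR solver would only require replacing the `Ridge(alpha=lam)` construction; the grid and split logic stay the same.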
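The "Experiment Setup" row states that RMR uses the Epanechnikov kernel with bandwidth σ = max(|y − wᵀx|). The sketch below shows that kernel and bandwidth rule applied to the empirical modal-regression objective, (1/n) Σᵢ K_σ(yᵢ − wᵀxᵢ); `modal_objective` is a hypothetical helper illustrating the quantity RMR maximizes before the regularization term is added, not the paper's optimization algorithm.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) for |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def modal_objective(w, X, y, sigma=None):
    """Empirical modal-regression objective (1/n) * sum_i K_sigma(y_i - w^T x_i)."""
    residuals = y - X @ w
    if sigma is None:
        # Bandwidth rule quoted in the table: sigma = max |y - w^T x|
        sigma = np.max(np.abs(residuals))
    # K_sigma(t) = K(t / sigma) / sigma
    return np.mean(epanechnikov(residuals / sigma)) / sigma
```

With this bandwidth choice every scaled residual lies in [-1, 1], so the kernel is non-zero for all samples and each one contributes to the objective.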