Bagging Ensembles for the Diagnosis and Prognostication of Alzheimer’s Disease
Authors: Peng Dai, Femida Gwadry-Sridhar, Michael Bauer, Michael Borrie
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive comparison is made against Support Vector Machines (SVM), Random Forest (RF), Decision Tree (DT) and Random Subspace (RS) methods. Experimental results show that our proposed algorithm yields superior results when compared to the other methods, suggesting promising robustness for possible clinical applications. |
| Researcher Affiliation | Academia | Peng Dai, Femida Gwadry-Sridhar, Michael Bauer: Department of Computer Science, University of Western Ontario, and Robarts Research, London, ON, Canada ({pdai5, fgwadrys, bauer}@uwo.ca); Michael Borrie, for the ADNI: Division of Geriatric Medicine, University of Western Ontario, London, ON, Canada (michael.borrie@sjhc.london.on.ca) |
| Pseudocode | No | The paper describes the methodology conceptually and with a schematic overview (Figure 1), but it does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. |
| Dataset Splits | No | After removing invalid entries, there are 2008 data points, with 586 normal records, 1006 MCI records, and 416 AD records. We randomly select 50 normal, 50 MCI, and 50 AD data points as the testing set. The rest (i.e., 536 normal, 956 MCI, 366 AD) are left as the training set (1858 training points and 150 testing points). |
| Hardware Specification | No | The paper mentions the hardware used for data acquisition (e.g., '1.5 T GE Signa scanner'), but it does not specify any hardware details (e.g., CPU, GPU models, memory) used for running the computational experiments or training the models. |
| Software Dependencies | No | The paper mentions software like 'CIVET' and 'CBRAIN platform' for data processing, but it does not provide specific version numbers for these or any other software dependencies, libraries, or frameworks used for model implementation or experimentation. |
| Experiment Setup | Yes | In our present implementation, we utilize bagging with decision trees for classification. ... The optimal result is achieved at dimension 35. ... It can be seen that with more than 140 trees the experimental results converge. |
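Since the paper released no code, the setup it describes (bagging with decision trees, 35 input dimensions, convergence at roughly 140 trees) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, random seeds, and use of scikit-learn are all assumptions; only the tree count, feature dimension, class structure, and split sizes come from the paper.

```python
# Hypothetical sketch of the paper's setup: a bagging ensemble of
# decision trees (BaggingClassifier's default base estimator is a
# decision tree) with 140 trees and 35-dimensional inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ADNI features; three classes mirror the
# normal / MCI / AD labels. The real data must be requested from ADNI.
X, y = make_classification(
    n_samples=2008, n_features=35, n_informative=20,
    n_classes=3, random_state=0,
)

# 150-point held-out test set, matching the split sizes quoted above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=150, stratify=y, random_state=0,
)

# 140 trees: the point past which the paper reports results converge.
model = BaggingClassifier(n_estimators=140, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Using the default base estimator (rather than passing one explicitly) keeps the sketch compatible across scikit-learn versions, which renamed the `base_estimator` parameter to `estimator` in 1.2.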