Nonlinear Sufficient Dimension Reduction with a Stochastic Neural Network
Authors: Siqi Liang, Yan Sun, Faming Liang
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on real-world classification and regression problems, we show that the proposed method compares favorably with the existing state-of-the-art sufficient dimension reduction methods and is computationally more efficient for large-scale data. |
| Researcher Affiliation | Academia | Siqi Liang, Purdue University, West Lafayette, IN 47906, liang257@purdue.edu; Yan Sun, Purdue University, West Lafayette, IN 47907, sun748@purdue.edu; Faming Liang, Purdue University, West Lafayette, IN 47907, fmliang@purdue.edu |
| Pseudocode | Yes | Algorithm 1: An Adaptive SGHMC algorithm for training StoNet. (A generic SGHMC update sketch follows the table.) |
| Open Source Code | No | The paper references open-source code for *other* methods (e.g., [21], [23], the sliced package, sklearn) but does not provide a link or an explicit statement about releasing the source code for the proposed StoNet method. |
| Open Datasets | Yes | The dataset, relative location of CT slices on axial axis, contains 53,500 CT images collected from 74 patients. [...] This dataset can be downloaded from the UCI Machine Learning Repository. |
| Dataset Splits | Yes | The hyperparameters of these methods were determined with 5-fold cross-validation in terms of misclassification rates. (A minimal cross-validation sketch follows the table.) |
| Hardware Specification | No | The only hardware detail given is "The CPU time (in seconds) was recorded on a computer of 2.2 GHz"; no processor model, core count, memory, or GPU information is provided. |
| Software Dependencies | No | The paper mentions software packages like 'R package RCIT', 'sklearn', and 'sliced' but does not provide specific version numbers for these or any other software dependencies crucial for reproducibility of their own method's experiments. |
| Experiment Setup | No | Refer to the supplement for the parameter settings used in the experiments. |
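
As noted in the Pseudocode row, the paper presents Algorithm 1, an adaptive SGHMC algorithm for training the StoNet. The snippet below is a minimal, generic SGHMC update in the style of Chen et al. (2014), not a reproduction of the paper's Algorithm 1; the step size `eta`, friction `alpha`, the toy quadratic energy, and all function names are illustrative assumptions.

```python
import numpy as np

def sghmc_step(theta, v, grad_U, eta=1e-3, alpha=0.1, rng=None):
    """One generic SGHMC update; not the paper's adaptive Algorithm 1.

    theta  : current parameter vector
    v      : current momentum vector
    grad_U : stochastic gradient of the negative log-posterior at theta
    eta    : step size (assumed value)
    alpha  : friction / momentum-decay coefficient (assumed value)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Injected noise with variance 2 * alpha * eta keeps the dynamics sampling-like.
    noise = rng.normal(scale=np.sqrt(2.0 * alpha * eta), size=np.shape(v))
    v = (1.0 - alpha) * v - eta * grad_U + noise
    return theta + v, v

# Toy usage: sample from N(0, I) via U(theta) = 0.5 * ||theta||^2, so grad_U = theta.
theta, v = np.zeros(3), np.zeros(3)
for _ in range(1000):
    theta, v = sghmc_step(theta, v, grad_U=theta)
```

In the paper, the adaptive SGHMC algorithm is what actually trains the StoNet; its specific adaptation scheme appears only in Algorithm 1 and the supplement, so the loop above should be read purely as an illustration of the SGHMC update form.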
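The Dataset Splits row quotes 5-fold cross-validation on misclassification rates for hyperparameter tuning. The sketch below shows one standard way to do this with scikit-learn; the classifier, parameter grid, and placeholder data are hypothetical and not taken from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder data standing in for the reduced features; the real inputs would
# come from the dimension-reduction step, which is not reproduced here.
X_reduced, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Hypothetical search space; the paper does not list the exact grids used.
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.01, 0.1]}

# 5-fold cross-validation scored by accuracy; misclassification rate = 1 - accuracy.
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")
search.fit(X_reduced, y)

print(search.best_params_)
print("CV misclassification rate:", 1.0 - search.best_score_)
```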