Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Online Sufficient Dimension Reduction Through Sliced Inverse Regression

Authors: Zhanrui Cai, Runze Li, Liping Zhu

JMLR 2020

Reproducibility Variable Result LLM Response
Research Type Experimental In Section 3, we demonstrate the numerical performance of our proposed procedure through simulations and several benchmark datasets available from the UCI machine learning repository. We provide some concluding remarks in Section 4. All proofs are relegated to the Appendix. 3. Numerical Validation In this section we evaluate the performance of our proposal through simulations. Throughout we consider the following three models... Estimation Accuracy: We first compare the estimation accuracy of the above competitors... The simulation results are summarized in Table 1.
Researcher Affiliation Academia Zhanrui Cai and Runze Li, Department of Statistics, The Pennsylvania State University, University Park, PA 16802, USA; Liping Zhu, Center for Applied Statistics, Institute of Statistics and Big Data, Renmin University of China, Beijing 100872, China
Pseudocode Yes Algorithm 1 Online sliced inverse regression via the perturbation method ... Algorithm 2 Online sliced inverse regression via gradient descent optimization
Open Source Code No The paper does not provide any explicit statement about releasing code, a link to a code repository, or mention of code in supplementary materials.
Open Datasets Yes In Section 3, we demonstrate the numerical performance of our proposed procedure through simulations and several benchmark datasets available from the UCI machine learning repository. In particular, the housing data is available at http://lib.stat.cmu.edu/datasets/boston, the abalone male and abalone female data sets are available at http://archive.ics.uci.edu/ml, and the ozone data set is available at https://www.stat.umn.edu/arc/software.html.
Dataset Splits Yes To compare the prediction accuracy of all online and batch learners (M1)-(M6), we randomly select 75% of the observations as a training set and the remaining 25% as a test set.
Hardware Specification No The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory amounts) used for running the experiments. Table 3 refers to memory constraints for data processing, not hardware specifications for computation.
Software Dependencies Yes We use the SVM algorithm implemented in the R package e1071 (Meyer et al., 2017) to learn classifiers and build up regression models.
Experiment Setup No The paper describes the algorithms and prediction models used (SVM, classification tree, LDA, GLM, RF) but does not provide specific hyperparameter values, training configurations, or system-level settings for these models.
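The paper's two online algorithms (perturbation and gradient-descent updates) are only named above, and their details are not reproduced here. For context, the following is a minimal sketch of classical batch sliced inverse regression (Li, 1991), which the online variants approximate incrementally. The function name `sir_directions` and its parameters are illustrative, not taken from the paper.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_directions=1):
    """Batch sliced inverse regression: estimate effective dimension
    reduction directions from the covariance of slice means of the
    standardized predictors."""
    n, p = X.shape
    # Center and whiten the predictors.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Partition observations into slices by the order of y.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Weighted covariance of the within-slice means of Z.
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale.
    _, v = np.linalg.eigh(M)
    B = inv_sqrt @ v[:, ::-1][:, :n_directions]
    return B / np.linalg.norm(B, axis=0)
```

With a single-index model such as y = (x'b)^3 + noise, the leading estimated direction should align closely with b; the online algorithms in the paper update this same eigen-problem as observations arrive, rather than recomputing it in batch.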