De-biasing Covariance-Regularized Discriminant Analysis
Authors: Haoyi Xiong, Wei Cheng, Yanjie Fu, Wenqing Hu, Jiang Bian, Zhishan Guo
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on both synthetic datasets and real application datasets to confirm the correctness of our theoretical analysis and demonstrate the superiority of DBLD over classical FLD, CRLD and other downstream competitors under HDLSS settings. |
| Researcher Affiliation | Collaboration | Haoyi Xiong, Wei Cheng, Yanjie Fu, Wenqing Hu, Jiang Bian, Zhishan Guo. Affiliations: Baidu Inc., Beijing, China; National Engineering Laboratory of Deep Learning Technology and Application, Beijing, China; Missouri University of Science and Technology, MO, United States; NEC Laboratories America, NJ, United States |
| Pseudocode | Yes | Algorithm 1 DBLD Estimation Algorithm |
| Open Source Code | No | The paper does not state that source code for the described methodology is released, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | To validate our algorithms, we evaluate our algorithms on a synthesized dataset (imported from [Cai and Liu, 2011])... on the Web datasets [Lin, 2017]... We evaluate DBLD on the real-world Electronic Health Records (EHR) data... [Zhang et al., 2015]... using leukemia and colon cancer datasets (derived from [Lin, 2017; Tibshirani et al., 2002]) |
| Dataset Splits | Yes | For each setting, we repeat the experiments 100 times and report the averaged results, in a cross-validation manner. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions tools such as Graphical Lasso and SVM, but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper states that algorithms were 'fine tuned with the best parameter λ' and describes the number of repetitions (e.g., '100 times'), but it does not explicitly list the specific hyperparameter values (e.g., exact λ values, learning rates, batch sizes) used for the experiments. |
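The Dataset Splits and Experiment Setup rows describe a protocol of repeated cross-validated runs with each method 'fine tuned with the best parameter λ', but list neither the λ grid nor the values chosen. The sketch below illustrates what such a protocol could look like, using scikit-learn's `GraphicalLasso` as a stand-in covariance-regularized estimator inside a Fisher-style discriminant rule (a CRLD-like baseline, not the authors' de-biased DBLD) and an assumed λ grid; the function name, grid, and toy data are hypothetical.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold

def sparse_lda_predict(X_tr, y_tr, X_te, lam):
    """Fisher-style rule with a graphical-lasso precision estimate
    (CRLD-like sketch; the paper's de-biasing step is omitted)."""
    mu0, mu1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    # Pooled within-class scatter: center each class at its own mean.
    Xc = np.vstack([X_tr[y_tr == 0] - mu0, X_tr[y_tr == 1] - mu1])
    omega = GraphicalLasso(alpha=lam, assume_centered=True).fit(Xc).precision_
    w = omega @ (mu1 - mu0)       # discriminant direction
    b = -0.5 * w @ (mu0 + mu1)    # midpoint threshold (equal priors)
    return (X_te @ w + b > 0).astype(int)

# HDLSS-flavoured toy data: more features than training samples per fold.
X, y = make_classification(n_samples=100, n_features=120, n_informative=10,
                           random_state=0)
grid = [0.1, 0.25, 0.5]           # assumed lambda grid; the paper gives none
scores = {lam: [] for lam in grid}
for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    for lam in grid:
        pred = sparse_lda_predict(X[tr], y[tr], X[te], lam)
        scores[lam].append((pred == y[te]).mean())
best = max(grid, key=lambda lam: np.mean(scores[lam]))
print({lam: round(float(np.mean(s)), 3) for lam, s in scores.items()},
      "best lambda:", best)
```

A faithful reproduction would additionally apply the de-biasing correction of the paper's Algorithm 1 and repeat the whole procedure 100 times, as reported in the rows above.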