Convolutional 2D LDA for Nonlinear Dimensionality Reduction

Authors: Qi Wang, Zequn Qin, Feiping Nie, Yuan Yuan

IJCAI 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiment results on several datasets show that the proposed method performs better than other state-of-the-art methods in terms of classification accuracy. In this section, we compare the proposed convolutional 2D LDA with eight traditional algorithms... |
| Researcher Affiliation | Academia | School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P. R. China; Unmanned System Research Institute (USRI), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P. R. China |
| Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper. |
| Open Source Code | No | The paper does not include an unambiguous statement or link for the release of open-source code for the described methodology. |
| Open Datasets | Yes | The paper uses three datasets: the MNIST dataset, which contains 60,000 examples for handwritten digit recognition; the CVL dataset [Diem et al., 2013], generated for the ICDAR2013 Handwritten Digit Recognition Competition; and the USPS dataset, used for lightweight handwritten digit recognition. |
| Dataset Splits | No | The paper mentions using 80% training data, as well as "20 training data" and "10 training data" settings, but does not specify a separate validation split or full train/validation/test percentages (see the split sketch after the table). |
| Hardware Specification | No | No specific hardware details (like GPU models, CPU types, or memory amounts) used for running experiments were mentioned. |
| Software Dependencies | No | The paper mentions TensorFlow but does not provide version numbers for it or any other software dependencies. |
| Experiment Setup | Yes | The batch size is set to 100 and the learning rate to 0.0002. The regularization weight γ is set to 0.0001. The "our method+" variant in Table 6 and Table 7 uses learning rates of 0.0004 and 0.0006, respectively (see the configuration sketch after the table). |
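
To make the reported split concrete, here is a minimal sketch of an 80/20 train/test partition on MNIST. Only the 80% training figure comes from the paper; the pooling of MNIST's standard train and test sets, the use of scikit-learn's train_test_split, the stratification, and the random seed are illustrative assumptions, and no validation split is shown because the paper does not specify one.

```python
# Sketch of an 80/20 train/test split on MNIST, matching the "80% training
# data" setting reported above. The stratified split and fixed seed are
# assumptions, not details from the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.datasets import mnist

(x1, y1), (x2, y2) = mnist.load_data()
x = np.concatenate([x1, x2])  # pool all 70,000 examples (assumption)
y = np.concatenate([y1, y2])

x_train, x_test, y_train, y_test = train_test_split(
    x, y, train_size=0.8, stratify=y, random_state=0)
print(x_train.shape, x_test.shape)  # (56000, 28, 28) and (14000, 28, 28)
```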
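
The hyperparameters in the Experiment Setup row are enough to sketch a training configuration. Since the paper's code is not released and its network architecture and LDA-based objective are not reproduced in this report, the CNN layers and the cross-entropy loss below are placeholders; only the batch size (100), learning rate (0.0002), and regularization weight γ = 0.0001 come from the paper. The Adam optimizer and the Keras API are assumptions.

```python
# Minimal training-setup sketch using the reported hyperparameters:
# batch size 100, learning rate 2e-4, regularization weight gamma = 1e-4.
# The architecture and loss are NOT the paper's; they are generic stand-ins.
import tensorflow as tf

BATCH_SIZE = 100      # reported in the paper
LEARNING_RATE = 2e-4  # 0.0002; "our method+" uses 4e-4 / 6e-4 instead
GAMMA = 1e-4          # reported regularization weight

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

# Placeholder CNN; the paper's actual architecture is not described here.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(28, 28, 1),
                           kernel_regularizer=tf.keras.regularizers.l2(GAMMA)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu",
                           kernel_regularizer=tf.keras.regularizers.l2(GAMMA)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),  # the paper instead optimizes an LDA criterion
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=1)
```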