Max-Mahalanobis Linear Discriminant Analysis Networks
Authors: Tianyu Pang, Chao Du, Jun Zhu
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test the proposed network on the widely used MNIST and CIFAR-10 datasets for both robustness to adversarial attacks and classification accuracy. As for robustness, we consider various adversarial attack methods, and the results demonstrate that the MM-LDA network is indeed much more robust to adversarial examples than the SR networks, even when the SR networks are enhanced by adversarial training methods. As for classification, we test the performance of the MM-LDA network on both class-biased and class-unbiased datasets. The results show that the MM-LDA networks can obtain higher accuracy on class-biased datasets while maintaining state-of-the-art accuracy on class-unbiased datasets. |
| Researcher Affiliation | Academia | Dept. of Comp. Sci. & Tech., BNRist Center, State Key Lab for Intell. Tech. & Sys., THBI Lab, Tsinghua University, Beijing, 100084, China. |
| Pseudocode | Yes | Algorithm 1: GenerateOptMeans; Algorithm 2: the training phase for the MM-LDA network (hedged Python sketches of both appear after the table). |
| Open Source Code | No | The paper does not provide any concrete access (link or explicit statement of release) to open-source code for the methodology described. |
| Open Datasets | Yes | We choose the widely used MNIST (Le Cun et al., 1998) and CIFAR-10 (Krizhevsky & Hinton, 2009) datasets. |
| Dataset Splits | Yes | Each dataset has 60,000 images, of which 50,000 are in the training set and the rest are in the test set... We empirically choose the value of C by doing 5-fold cross-validation on the training set (a selection-loop sketch appears after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments; at most it alludes indirectly to a general computing environment. |
| Software Dependencies | No | The paper mentions 'the adaptive optimization method Adam (Kingma & Ba, 2015)', but it does not specify any software names with version numbers for reproducibility. |
| Experiment Setup | Yes | The number of training steps is 20,000 on MNIST and 90,000 on CIFAR-10 for both networks... When applying the MM-LDA network, the only hyperparameter is the square norm C of the Gaussian means in MMD... we will set C = 100 (a training-step sketch appears after the table). |
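
The card names Algorithm 1 (GenerateOptMeans) without reproducing its body. As a reading aid, below is a minimal NumPy sketch of the construction the name suggests: L unit vectors with all pairwise inner products equal to -1/(L-1) (vertices of a regular simplex), scaled so each mean has squared norm C. The function name and the exact recurrence are assumptions for illustration; consult the paper's Algorithm 1 for the authoritative procedure.

```python
import numpy as np

def generate_opt_means(C: float, p: int, L: int) -> np.ndarray:
    # Build L mean vectors in R^p, each with squared norm C and all
    # pairwise inner products equal to -C/(L-1) (a regular simplex).
    # Requires L <= p + 1.
    assert L <= p + 1, "a regular L-vertex simplex needs L <= p + 1"
    means = np.zeros((L, p))
    means[0, 0] = 1.0  # first unit-norm mean along the first axis
    for i in range(1, L):
        for j in range(i):
            # Fix coordinate j so that <mu_i, mu_j> = -1/(L-1).
            means[i, j] = -(1.0 / (L - 1) + means[i] @ means[j]) / means[j, j]
        if i < p:
            # Fix coordinate i so that mu_i has unit norm.
            means[i, i] = np.sqrt(max(0.0, 1.0 - means[i] @ means[i]))
    return np.sqrt(C) * means
```

For example, with the card's CIFAR-10 setting (L = 10 classes, C = 100), `generate_opt_means(100.0, p, 10)` yields means of norm 10 whose pairwise inner products are all -100/9.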
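Algorithm 2 (the training phase) is likewise only named. Assuming the network maps each input to a p-dimensional feature and classifies by a softmax over negative squared distances to the fixed means (the LDA view with shared identity covariance and equal class priors), one training step could look like the following PyTorch sketch; `model`, `optimizer`, and the distance-based logits are illustrative assumptions, not quoted from the paper.

```python
import torch
import torch.nn.functional as F

def mm_lda_logits(z: torch.Tensor, means: torch.Tensor) -> torch.Tensor:
    # Logits as negative halved squared distances from features z
    # of shape (batch, p) to the fixed Max-Mahalanobis means (L, p).
    return -0.5 * torch.cdist(z, means).pow(2)

def train_step(model, optimizer, x, y, means):
    # One optimization step on a batch (x, y); `model` is assumed
    # to return (batch, p) features.
    optimizer.zero_grad()
    logits = mm_lda_logits(model(x), means)
    loss = F.cross_entropy(logits, y)  # softmax over distance logits
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under the card's reported setup, `optimizer` would be Adam and the loop would run for 20,000 steps on MNIST or 90,000 on CIFAR-10, with the means held fixed throughout training.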
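The card also reports that C was chosen by 5-fold cross-validation on the training set. A generic sketch of that selection loop follows; `train_and_score` is a hypothetical caller-supplied trainer/evaluator, not an interface from the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_C(candidates, X, y, train_and_score, seed=0):
    # Return the candidate C with the best mean validation accuracy
    # over 5 folds. `train_and_score(C, X_tr, y_tr, X_va, y_va) -> float`
    # is supplied by the caller (hypothetical helper).
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    mean_acc = {
        C: float(np.mean([train_and_score(C, X[tr], y[tr], X[va], y[va])
                          for tr, va in kf.split(X)]))
        for C in candidates
    }
    return max(mean_acc, key=mean_acc.get)
```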