A Causal View on Robustness of Neural Networks
Authors: Cheng Zhang, Kun Zhang, Yingzhen Li
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Compared to DNNs, experiments on both MNIST and a measurement-based dataset show that our model is significantly more robust to unseen manipulations." |
| Researcher Affiliation | Collaboration | Cheng Zhang (Microsoft Research, Cheng.Zhang@microsoft.com); Kun Zhang (Carnegie Mellon University, kunz1@cmu.edu); Yingzhen Li (Microsoft Research, Yingzhen.Li@microsoft.com) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (e.g., sections explicitly labeled "Algorithm" or "Pseudocode"). |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository. |
| Open Datasets | Yes | "We evaluate the robustness of deep CAMA for image classification using both MNIST and a binary classification task derived from CIFAR-10." |
| Dataset Splits | No | The paper discusses training and testing, but does not provide specific details on validation dataset splits (e.g., percentages, sample counts, or a clear methodology for validation splits). |
| Hardware Specification | No | The paper mentions "Nathan Jones for his support with computing infrastructure" but does not specify any particular hardware details such as GPU models, CPU types, or memory amounts used for the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used (e.g., Python, PyTorch, TensorFlow, etc.). |
| Experiment Setup | No | The paper describes general training scenarios (clean data, augmented data) and fine-tuning, but does not provide specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |