Shaping Noise for Robust Attributions in Neural Stochastic Differential Equations

Authors: Sumit Kumar Jha, Rickard Ewetz, Alvaro Velasquez, Arvind Ramanathan, Susmit Jha (pp. 9567–9574)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our approach on the ImageNet dataset in Table 1 and show that the attributions computed over such Neural SDEs with attribution-driven noise are consistently more robust to input perturbations."
Researcher Affiliation | Collaboration | 1. Computer Science Department, University of Texas at San Antonio, TX 78249; 2. Electrical and Computer Engineering Department, University of Central Florida, Orlando, FL 32816; 3. Information Directorate, Air Force Research Laboratory, Rome, NY 13441; 4. Data Science and Learning, Argonne National Laboratory, Lemont, IL 60439; 5. Computer Science Laboratory, SRI International, Menlo Park, CA 94709
Pseudocode | No | The paper does not contain pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper does not provide any statement about releasing open-source code or a link to a code repository.
Open Datasets | Yes | "We evaluate our approach on the ImageNet dataset in Table 1 and show that the attributions computed over such Neural SDEs with attribution-driven noise are consistently more robust to input perturbations."
Dataset Splits | No | The paper mentions "1,000 random images from the ImageNet validation data set" but does not specify the overall training/validation/test split percentages or sample counts for all splits, so the data partitioning cannot be reproduced.
Hardware Specification | Yes | "Our stochastic training and attribution analysis were performed on the ImageNet benchmark using 8 A100 GPUs with 40 GB RAM."
Software Dependencies | No | The paper mentions a "ResNet-50 model implemented in PyTorch" but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | "ImageNet training was performed using a learning rate of 0.0001 with a ReduceLROnPlateau scheduler, the noise constant σ = 0.5, and the Adam optimizer."
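The Experiment Setup row quotes concrete hyperparameters (Adam, lr = 0.0001, ReduceLROnPlateau, σ = 0.5). A minimal PyTorch sketch of that configuration is below; the `torch.nn.Linear` model and the random training batch are stand-ins for illustration, not the authors' Neural SDE or ImageNet pipeline, and the noise-injection line is only a rough analogue of attribution-driven diffusion noise.

```python
import torch

# Placeholder model: the paper trains a Neural SDE on ImageNet; a small
# linear layer stands in so the configuration itself is runnable.
model = torch.nn.Linear(10, 2)

# Hyperparameters quoted in the paper's Experiment Setup row.
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
sigma = 0.5  # noise constant sigma from the paper

for epoch in range(3):
    x = torch.randn(4, 10)                      # stand-in batch
    x_noisy = x + sigma * torch.randn_like(x)   # Gaussian perturbation scaled by sigma
    loss = model(x_noisy).pow(2).mean()         # stand-in loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # ReduceLROnPlateau monitors a metric each epoch
```

ReduceLROnPlateau only lowers the learning rate after the monitored loss stops improving for its `patience` window, which matches the quoted setup where the rate starts at 0.0001 and decays on plateaus.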