Towards Automating Model Explanations with Certified Robustness Guarantees

Authors: Mengdi Huai, Jinduo Liu, Chenglin Miao, Liuyi Yao, Aidong Zhang

AAAI 2022, pp. 6935-6943 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental "We also conduct extensive experiments on real-world datasets to verify the desirable properties of the proposed method." "Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed interpretation method."
Researcher Affiliation Collaboration Mengdi Huai (1), Jinduo Liu (2), Chenglin Miao (3), Liuyi Yao (4), Aidong Zhang (1); 1 University of Virginia, 2 Beijing University of Technology, 3 University of Georgia, 4 Alibaba Group
Pseudocode No No clearly labeled pseudocode or algorithm blocks were found in the paper.
Open Source Code No The paper does not provide any explicit statements about releasing source code or links to code repositories.
Open Datasets Yes Here we adopt three image datasets: the MNIST (Le Cun et al. 1998), CIFAR-10 (Recht et al. 2018), and AT&T (Chopra, Hadsell, and Le Cun 2005) datasets.
Dataset Splits Yes Table 1: The statistic information of the adopted datasets. ... #Training 55,000, #Validation 5,000, #Testing 10,000 (a reproduction sketch of this split follows the table).
Hardware Specification No The paper does not provide specific details about the hardware used to run the experiments.
Software Dependencies No The paper does not specify any software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup No Due to space limitations, the parameter settings, the description of the network architectures and more experiment results will be given in the full version of the paper.
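The Dataset Splits row above quotes MNIST counts of 55,000 training, 5,000 validation, and 10,000 test images. Because the paper releases no code and does not describe how the validation set was drawn, the following is only a minimal sketch of how such a split is commonly reproduced, assuming torchvision's standard MNIST loader and an arbitrary random seed (neither is stated in the paper).

# Sketch (not from the paper): reproducing the quoted 55,000/5,000/10,000 MNIST split.
# Assumes torchvision is installed and that the validation set is carved out of the
# standard 60,000-image MNIST training set; the paper does not specify its procedure.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()

# MNIST ships with 60,000 training images and 10,000 test images.
full_train = datasets.MNIST(root="data", train=True, download=True, transform=transform)
test_set = datasets.MNIST(root="data", train=False, download=True, transform=transform)

# Hold out 5,000 images for validation; the seed is an arbitrary choice for reproducibility.
train_set, val_set = random_split(
    full_train, [55_000, 5_000], generator=torch.Generator().manual_seed(0)
)

print(len(train_set), len(val_set), len(test_set))  # 55000 5000 10000

Any split along these lines matches the counts quoted from Table 1, but the exact partition used in the paper is unknown.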