Explainable Models with Consistent Interpretations

Authors: Vipin Pillai, Hamed Pirsiavash

AAAI 2021, pp. 2431-2439 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform all our experiments on ImageNet (Deng et al. 2009) and MS-COCO (Lin et al. 2014) datasets. Tables 1 and 2 show the results using the evaluation metrics from Section 4.3 on the ImageNet and MS-COCO datasets respectively.
Researcher Affiliation | Academia | Vipin Pillai, Hamed Pirsiavash, University of Maryland, Baltimore County
Pseudocode | No | The paper describes its method verbally and mathematically but does not include a structured pseudocode block or algorithm.
Open Source Code | Yes | The code and models are publicly available.
Open Datasets | Yes | We perform all our experiments on ImageNet (Deng et al. 2009) and MS-COCO (Lin et al. 2014) datasets.
Dataset Splits | Yes | For evaluation, we use the validation set of 50k images for ImageNet and 40k images for the MS-COCO dataset.
Hardware Specification | Yes | We use PyTorch (Paszke et al. 2019) along with Nvidia Titan RTX and 2080Ti GPUs for training and evaluating our models.
Software Dependencies | No | The paper mentions using PyTorch but does not provide specific version numbers for PyTorch or any other software libraries or dependencies used in the experiments.
Experiment Setup | Yes | For training the models on the ImageNet dataset, we use SGD with a learning rate of 0.1 for ResNet18 and 0.01 for AlexNet, decayed by 0.1 every 30 epochs. We set the λ hyperparameter in Eq (5) to 25 for the ImageNet experiments and 1 for the MS-COCO experiments.
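The learning-rate schedule quoted in the Experiment Setup row (SGD at 0.1 for ResNet18 or 0.01 for AlexNet, decayed by a factor of 0.1 every 30 epochs) can be sketched as a small Python helper. The function name and signature are illustrative, not taken from the paper's code:

```python
def step_decay_lr(base_lr: float, epoch: int,
                  decay: float = 0.1, step_size: int = 30) -> float:
    """Learning rate under step decay: multiplied by `decay` every
    `step_size` epochs, matching the schedule reported for the
    ImageNet experiments (e.g. ResNet18: 0.1 until epoch 30,
    then 0.01, then 0.001, ...)."""
    return base_lr * decay ** (epoch // step_size)

# ResNet18 starts at 0.1; AlexNet starts at 0.01.
print(step_decay_lr(0.1, 0))    # 0.1
print(step_decay_lr(0.1, 30))   # ~0.01
print(step_decay_lr(0.01, 45))  # ~0.001
```

In a PyTorch training loop the same schedule is obtained with `torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)`.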