CoRTX: Contrastive Framework for Real-time Explanation

Authors: Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on three real-world datasets further demonstrate the efficiency and efficacy of our proposed CoRTX framework."
Researcher Affiliation | Collaboration | "¹Rice University, ²Meta Platforms, Inc. {ynchuang, guanchu.wang, fyang, xia.hu}@rice.edu, {quanz, pushkart, caixuanting}@fb.com"
Pseudocode | Yes | "Algorithm 1: Real-Time Explainer Training with CoRTX"
Open Source Code | Yes | "Our source code is available at: https://github.com/ynchuang/CoRTX-720"
Open Datasets | Yes | "Our experiments consider two tabular datasets: Census (Dua & Graff, 2017) with 13 features and Bankruptcy (Liang et al., 2016) with 96 features, and one image dataset: CIFAR-10 (Krizhevsky et al., 2009) with 32 x 32 pixels."
Dataset Splits | Yes | "Census Income: A collection of human social information with 26048 samples for training and validation, and 6513 samples for testing. Bankruptcy: A financial dataset containing 5455 company samples in the training and validation sets, and 1364 instances in the testing set. CIFAR-10: An image dataset with 60000 images in 10 different classes, where each image is composed of 32 x 32 pixels. We follow the original dataset division for the training, validation, and testing process."
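The reported split sizes imply roughly an 80/20 train+validation/test division for the two tabular datasets, while CIFAR-10 follows its standard 50000/10000 division. A minimal sketch of this arithmetic (sample counts are taken from the report; the standard CIFAR-10 division is assumed, and the percentage computation is purely illustrative):

```python
# Train+validation vs. test counts as reported for each dataset.
splits = {
    "Census Income": {"train_val": 26048, "test": 6513},
    "Bankruptcy": {"train_val": 5455, "test": 1364},
    "CIFAR-10": {"train_val": 50000, "test": 10000},  # assumed standard division
}

for name, s in splits.items():
    total = s["train_val"] + s["test"]
    test_frac = s["test"] / total
    print(f"{name}: {total} total samples, {test_frac:.0%} held out for testing")
```

Running this confirms the tabular datasets hold out about 20% of samples for testing, consistent with a conventional 80/20 split.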
Hardware Specification | Yes | Computing infrastructure: GPU; GPU model: Nvidia-A40; GPU number: 1; GPU memory: 46068 MB
Software Dependencies | No | The paper mentions the DeepCTR package (Shen, 2017) and Captum as tools used, but does not provide specific version numbers for these or other key software components used in the CoRTX implementation, beyond a publication year for DeepCTR's description.
Experiment Setup | Yes | "Table 2: Hyper-parameters and model structure settings in CoRTX. The explanation encoder and the explanation head are designed with the model structures and are learned with the hyper-parameters in Table 2."