ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging
Authors: Alessandro Fontanella, Antreas Antoniou, Wenwen Li, Joanna Wardlaw, Grant Mair, Emanuele Trucco, Amos Storkey
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39% to 72.55%, and of COVID-19 related findings in lung CT scans from 67.71% to 70.84%, and exceeds the performance of competing methods. |
| Researcher Affiliation | Academia | 1School of Informatics, University of Edinburgh, Edinburgh, UK 2Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK 3VAMPIRE project / CVIP, Computing, School of Science and Engineering, University of Dundee, UK. |
| Pseudocode | Yes | Algorithm 1 ACAT training. Data: D = (x_i, i = 1, 2, ..., N_D). Train baseline classification network f and autoencoder D(E(·)) on D. Given E(x_j) = z_j, minimise: g(z) = L(d(z), t) + α‖z − E(x_j)‖_{L1}. Decode the obtained latent vector to compute the counterfactual D(z*). Obtain saliency maps S_j from positive and negative counterfactuals. Train ACAT on D using x_j and S_j as input. |
| Open Source Code | Yes | Code to reproduce the experiments can be accessed at the following url: ACAT GitHub repository. |
| Open Datasets | Yes | We performed our experiments on two datasets: IST-3 (Sandercock et al., 2011) and MosMed (Morozov et al., 2020). More details about the data are provided in Appendix A. Further information about the trial protocol, data collection and the data use agreement can be found at the following url: IST-3 information. |
| Dataset Splits | Yes | Both datasets were divided into training, validation and test sets with a 70-15-15 split and three runs with different random seeds were performed. |
| Hardware Specification | Yes | All the networks were trained using 8 NVIDIA GeForce RTX 2080 GPUs. |
| Software Dependencies | No | The paper states that experiments were run using '8 NVIDIA GeForce RTX 2080 GPUs' but does not specify software dependencies like deep learning frameworks (e.g., TensorFlow, PyTorch) or their versions, nor CUDA versions. |
| Experiment Setup | Yes | The baseline models were trained for 200 epochs and then employed, together with an autoencoder trained to reconstruct the images, to obtain the saliency maps that are needed for our framework. Our framework and the competing methods were fine-tuned for 100 epochs, starting from the weights of the baseline models. The hidden layer is followed by a leaky ReLU activation and dropout with p = 0.1. |
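The counterfactual optimisation in the quoted Algorithm 1 (minimise a classification loss on the decoded latent plus an L1 penalty pulling z back toward the original code E(x_j)) can be sketched as follows. This is a toy illustration, not the paper's implementation: the classifier-on-decoded-latent d(z) is replaced by a hypothetical linear-sigmoid stand-in, and all weights, step sizes, and dimensions are made-up assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def counterfactual_latent(z0, w, b, target, alpha=0.1, lr=0.5, steps=200):
    """Minimise g(z) = BCE(d(z), target) + alpha * ||z - z0||_1 by gradient
    descent, where d(z) = sigmoid(w @ z + b) is a toy stand-in for the
    classifier applied to the decoded latent D(z)."""
    z = z0.copy()
    for _ in range(steps):
        p = sigmoid(w @ z + b)
        # dBCE/dz = (p - target) * w; the L1 term contributes a subgradient
        grad = (p - target) * w + alpha * np.sign(z - z0)
        z -= lr * grad
    return z

rng = np.random.default_rng(0)
z0 = rng.normal(size=8)          # stands in for E(x_j), the input's latent code
w, b = rng.normal(size=8), 0.0   # hypothetical classifier-head weights
z_pos = counterfactual_latent(z0, w, b, target=1.0)  # positive counterfactual
z_neg = counterfactual_latent(z0, w, b, target=0.0)  # negative counterfactual
```

In the paper the two optimised latents are decoded (D(z*)) and the positive/negative counterfactual images are compared against the input to derive the saliency maps S_j used to train ACAT.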
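The reported 70-15-15 train/validation/test split with three random seeds is a simple, reproducible procedure to re-implement; a minimal sketch (the seeding scheme and ID handling here are assumptions, not taken from the paper):

```python
import random

def split_70_15_15(ids, seed):
    """Shuffle the sample IDs with a fixed seed and cut 70/15/15."""
    rnd = random.Random(seed)
    ids = list(ids)
    rnd.shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.70 * n), int(0.15 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# Three runs with different random seeds, as in the paper's protocol
splits = [split_70_15_15(range(100), seed) for seed in (0, 1, 2)]
```

For patient-level data such as IST-3 or MosMed one would split by patient ID rather than by individual scan, so that slices from the same subject never straddle the train/test boundary.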
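The experiment-setup row mentions a hidden layer followed by a leaky ReLU activation and dropout with p = 0.1. A minimal numpy sketch of such a head is below; the layer widths and the inverted-dropout convention are assumptions for illustration, not details from the paper (which would typically use a framework such as PyTorch).

```python
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    return np.where(x > 0, x, negative_slope * x)

def head_forward(x, W1, b1, W2, b2, p=0.1, rng=None, train=True):
    """Hidden layer -> leaky ReLU -> dropout(p) -> linear output logits.
    Inverted dropout: kept units are scaled by 1/(1-p) at train time,
    so inference needs no rescaling."""
    h = leaky_relu(x @ W1 + b1)
    if train and rng is not None:
        keep = (rng.random(h.shape) >= p).astype(h.dtype) / (1.0 - p)
        h = h * keep
    return h @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 32)) * 0.1, np.zeros(32)  # hypothetical widths
W2, b2 = rng.normal(size=(32, 2)) * 0.1, np.zeros(2)
x = rng.normal(size=(4, 16))
logits_train = head_forward(x, W1, b1, W2, b2, rng=rng, train=True)
logits_eval = head_forward(x, W1, b1, W2, b2, train=False)
```

At evaluation time the forward pass is deterministic; the stochastic dropout mask is applied only during the 100 fine-tuning epochs described in the setup.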