Valid P-Value for Deep Learning-driven Salient Region
Authors: Daiki Miwa, Vo Nguyen Le Duy, Ichiro Takeuchi
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the validity of the proposed method through numerical examples in synthetic and real datasets. Furthermore, we develop a Keras-based framework for conducting the proposed selective inference for a wide class of CNNs without additional implementation cost. We conducted experiments on synthetic and real-world datasets, through which we show that our proposed method can control the false positive rate, has good performance in terms of computational efficiency, and provides good results in practical applications. |
| Researcher Affiliation | Collaboration | Daiki Miwa Nagoya Institute of Technology miwa.daiki.mllab.nit@gmail.com Vo Nguyen Le Duy RIKEN duy.vo@riken.jp Ichiro Takeuchi Nagoya University and RIKEN ichiro.takeuchi@mae.nagoya-u.ac.jp |
| Pseudocode | Yes | Algorithm 1 SI DNN Saliency (a hedged sketch of the selective p-value step this algorithm feeds into appears after the table) |
| Open Source Code | Yes | Our code is available at https://github.com/takeuchi-lab/selective_inference_dnn_salient_region. |
| Open Datasets | Yes | We examined the brain image dataset extracted from the dataset used in Buda et al. (2019), which included 939 and 941 images with and without tumors, respectively. |
| Dataset Splits | No | The paper describes generating synthetic data and using a real-world dataset but does not specify a training/validation/test split for any machine learning model or experiment. It discusses setting parameters for statistical tests and generating images for FPR/TPR analysis, not data partitioning for model training and evaluation in the typical sense. |
| Hardware Specification | No | The paper does not specify any hardware components (e.g., CPU, GPU models, memory, or specific computing platforms) used for running the experiments. |
| Software Dependencies | No | The paper mentions developing a 'Keras-based framework' but does not provide specific version numbers for Keras or any other software dependencies. |
| Experiment Setup | Yes | In all experiments, we set the corresponding parameter to 0 in the mean null test and to 5 in the global null test, and the significance level to α = 0.05. We used CAM as the saliency method in all experiments. More details (methods for comparison, network structure, etc.) can be found in Appendix A.4, which states: 'We used the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.001 and a batch size of 32 for 10 epochs'. A hedged sketch of this training configuration appears below the table. |
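
The Experiment Setup row pins down the reported training configuration (Adam optimizer, learning rate 0.001, batch size 32, 10 epochs) and the use of CAM as the saliency method. Below is a minimal Keras sketch of that configuration, not the authors' released framework (see the repository linked above); the CNN architecture, the layer names `last_conv` and `classifier`, the input shape, and the random stand-in data are assumptions made only for illustration, and the final thresholding merely indicates how a salient region might be extracted before the paper's selective inference test.

```python
# Minimal sketch (not the authors' released code) of the training configuration
# reported in Appendix A.4: Adam optimizer, learning rate 0.001, batch size 32,
# 10 epochs. The CNN, layer names, input shape, and data are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(64, 64, 1)):
    # Placeholder CNN ending in global average pooling + a dense layer,
    # the structure that CAM (class activation mapping) assumes.
    inputs = keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same", name="last_conv")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid", name="classifier")(x)
    return keras.Model(inputs, outputs)

model = build_cnn()
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),  # reported learning rate
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# x_train / y_train are hypothetical arrays standing in for the brain-image data.
x_train = np.random.rand(128, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(128, 1))
model.fit(x_train, y_train, batch_size=32, epochs=10)  # reported batch size and epochs

def class_activation_map(model, image):
    """CAM: weight the last conv feature maps by the classifier's dense weights."""
    last_conv = model.get_layer("last_conv")
    feature_model = keras.Model(model.input, last_conv.output)
    fmaps = feature_model(image[None, ...])[0]           # (H, W, C) feature maps
    w = model.get_layer("classifier").get_weights()[0]   # (C, 1) dense kernel
    cam = tf.reduce_sum(fmaps * w[None, None, :, 0], axis=-1)
    return cam.numpy()

saliency = class_activation_map(model, x_train[0])
salient_region = saliency > saliency.mean()  # illustrative threshold for the salient region
```

In the paper itself, the salient region obtained from CAM is not interpreted directly; it is tested with the selective inference procedure of Algorithm 1 so that the resulting p-value remains valid despite the region being selected by the network.

The pseudocode row names Algorithm 1 (SI DNN Saliency), which constructs the conditioning event (the truncation region of the test statistic) needed for a valid selective p-value. That search is the technically involved part of the paper and is not reproduced here; the sketch below only shows the standard final step of computing a two-sided selective p-value from a truncated normal distribution, assuming the truncation intervals are already given. The function name, the unit variance, and the interval endpoints in the example are made up for illustration.

```python
# Hypothetical sketch of the final step of selective inference: computing a
# selective p-value from a truncated normal distribution, assuming the
# truncation intervals over the test statistic have already been identified
# (the part Algorithm 1 "SI DNN Saliency" is responsible for).
import numpy as np
from scipy.stats import norm

def selective_p_value(t_obs, sigma, intervals):
    """Two-sided selective p-value for an observed statistic t_obs that is
    N(0, sigma^2) under the null, conditioned on t falling in `intervals`
    (a list of disjoint (lower, upper) pairs forming the truncation region)."""
    cdf = lambda x: norm.cdf(x, loc=0.0, scale=sigma)
    denom = sum(cdf(u) - cdf(l) for l, u in intervals)   # total truncated mass
    numer = sum(cdf(min(u, t_obs)) - cdf(l)              # truncated mass at or below t_obs
                for l, u in intervals if l < t_obs)
    f = numer / denom
    return 2.0 * min(f, 1.0 - f)

# Made-up example: observed statistic 2.1, unit variance, and a truncation
# region of two intervals (in practice these would come from Algorithm 1).
print(selective_p_value(2.1, 1.0, [(-0.5, 0.8), (1.5, 3.0)]))
```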
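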