Explain Any Concept: Segment Anything Meets Concept-Based Explanation
Authors: Ao Sun, Pingchuan Ma, Yuanyuan Yuan, Shuai Wang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluation over two popular datasets (ImageNet and COCO) illustrates the highly encouraging performance of EAC over commonly-used XAI methods. |
| Researcher Affiliation | Academia | Ao Sun, Pingchuan Ma, Yuanyuan Yuan, and Shuai Wang The Hong Kong University of Science and Technology {asunac, pmaab, yyuanaq, shuaiw}@cse.ust.hk |
| Pseudocode | No | The paper includes a technical pipeline diagram (Figure 1) but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Open Source. We publicly release and maintain EAC under the following github page: https://github.com/Jerry00917/samshap. |
| Open Datasets | Yes | We evaluate EAC on two popular datasets, ImageNet [38] and COCO [39]. |
| Dataset Splits | Yes | We use the standard training/validation split for both datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for experiments. |
| Software Dependencies | No | The paper does not specify version numbers for key software components or libraries used in the experiments. |
| Experiment Setup | Yes | The only hyper-parameter considered in EAC is when fitting the PIE scheme, i.e., a simple linear neural network learning scheme, and the Monte Carlo (MC) sampling: we set lr = 0.008 and the number of MC samples to 50,000 throughout all experiments. |
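The experiment-setup row describes the only hyper-parameters: a learning rate of 0.008 for fitting the linear surrogate (the PIE scheme) and 50,000 Monte Carlo samples. A minimal sketch of that recipe, assuming a hypothetical black-box scoring function over SAM-derived concept masks (the `model_confidence` function and the weights below are illustrative stand-ins, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the black-box model: returns a confidence
# score given a binary vector indicating which concept segments are kept.
true_weights = np.array([0.6, 0.1, 0.3])
def model_confidence(mask):
    return float(true_weights @ mask)

n_concepts = 3
n_samples = 50000  # MC sampling budget quoted in the paper
lr = 0.008         # learning rate quoted in the paper

# Monte Carlo sampling: random subsets of concepts are kept or ablated.
X = rng.integers(0, 2, size=(n_samples, n_concepts)).astype(float)
y = np.array([model_confidence(x) for x in X])

# Fit a simple linear surrogate by SGD, one pass over the MC samples.
w = np.zeros(n_concepts)
for xi, yi in zip(X, y):
    w += lr * (yi - xi @ w) * xi

# The surrogate's weights rank the concepts by estimated importance.
ranking = np.argsort(-w)
```

Because the toy scoring function here is exactly linear, the surrogate weights recover it; with a real classifier the weights only approximate the model's sensitivity to each concept.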