Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis
Authors: Zihao Zhao, Sheng Wang, Qian Wang, Dinggang Shen
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method using two representative types of medical images and two common types of gaze data. The experimental results demonstrate the practicality of McGIP, indicating its high potential for various clinical scenarios and applications. |
| Researcher Affiliation | Collaboration | 1School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China 2School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China 3Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China 4Shanghai Clinical Research and Trial Center, Shanghai, China |
| Pseudocode | No | The paper describes methods through text and figures, but does not include explicit pseudocode or algorithm blocks labeled as such. |
| Open Source Code | Yes | The code implementation of our method is released at https://github.com/zhaozh10/McGIP. |
| Open Datasets | Yes | We conduct experiments on two datasets: INbreast (Moreira et al. 2012) and Tufts dental dataset (Panetta et al. 2021). |
| Dataset Splits | Yes | The Tufts dataset (Panetta et al. 2021) is composed of 1000 panoramic dental X-ray images, together with processed gaze heatmaps. We choose 70% and 10% of images for training and validation, while the remaining 20% of images constitute the testing set. |
| Hardware Specification | Yes | All the experiments are implemented with PyTorch 1.13.0 on a single NVIDIA RTX3060. |
| Software Dependencies | Yes | All the experiments are implemented with PyTorch 1.13.0 on a single NVIDIA RTX3060. |
| Experiment Setup | Yes | Unless otherwise specified, all networks are trained for 200 epochs using the Adam optimizer with the learning rate (lr) set to 2e-5 in the pretraining. Fine-tuning and linear probing for final classification are trained for 10 epochs (INbreast) and 20 epochs (Tufts) with the Adam optimizer (lr: 2e-5). All pre-training methods are initialized from ImageNet pre-trained weights. |
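The 70%/10%/20% train/validation/test split reported for the 1000-image Tufts dataset can be sketched in plain Python. The shuffling, seed, and index-based partition here are illustrative assumptions; the paper does not specify how the split was drawn.

```python
import random

def split_dataset(n_images, train_frac=0.7, val_frac=0.1, seed=0):
    """Partition image indices into train/val/test sets.

    The 70/10/20 ratios follow the split described for the Tufts
    dental dataset; the shuffle and seed are illustrative assumptions.
    """
    indices = list(range(n_images))
    random.Random(seed).shuffle(indices)
    n_train = int(train_frac * n_images)
    n_val = int(val_frac * n_images)
    train = indices[:n_train]
    val = indices[n_train:n_train + n_val]
    test = indices[n_train + n_val:]
    return train, val, test

# 1000 panoramic dental X-ray images -> 700 train / 100 val / 200 test
train, val, test = split_dataset(1000)
```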