Memory-Aided Contrastive Consensus Learning for Co-salient Object Detection

Authors: Peng Zheng, Jie Qin, Shuo Wang, Tian-Zhu Xiang, Huan Xiong

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on all the latest CoSOD benchmarks demonstrate that our lite MCCL outperforms 13 cutting-edge models, achieving the new state of the art (5.9% and 6.2% improvement in S-measure on CoSOD3k and CoSal2015, respectively).
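For context, the S-measure referenced here is the structure measure of Fan et al. (ICCV 2017), the standard structural-similarity metric for (co-)saliency evaluation; its definition below is background knowledge, not quoted from this paper:

```latex
% Structure measure: a weighted sum of object-aware (S_o) and
% region-aware (S_r) structural similarity between prediction and ground truth.
S_{\alpha} = \alpha \cdot S_o + (1 - \alpha) \cdot S_r, \qquad \alpha = 0.5 \ \text{by default}
```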
Researcher Affiliation | Collaboration | 1. Nanjing University of Aeronautics and Astronautics, Nanjing, China; 2. ETH Zurich, Zurich, Switzerland; 3. Inception Institute of Artificial Intelligence, Abu Dhabi, UAE; 4. Harbin Institute of Technology, China; 5. Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | Yes | Our source codes, saliency maps, and online demos are publicly available at https://github.com/ZhengPeng7/MCCL.
Open Datasets | Yes | We follow (Zhang et al. 2020b) to use DUTS_class (Zhang et al. 2020c) and COCO-SEG (Wang et al. 2019) as our training sets.
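As an illustration of how class-grouped training data of this kind (one sub-folder per object class, as in DUTS_class) is often sampled, here is a minimal sketch; the GroupSampler class, directory layout, and sampling logic are assumptions for illustration, not the authors' code:

```python
import os
import random

import torch
import torchvision.transforms as T
from PIL import Image

# Hypothetical group-wise sampler for class-grouped CoSOD training data
# (e.g., DUTS_class / COCO-SEG organized as one sub-folder per object class).
class GroupSampler:
    def __init__(self, root, image_size=256, batch_cap=48):
        self.groups = {
            cls: [os.path.join(root, cls, f)
                  for f in sorted(os.listdir(os.path.join(root, cls)))]
            for cls in sorted(os.listdir(root))
        }
        # Paper's rule: batchsize = min(#group_1, ..., #group_N, 48).
        self.batch_size = min(min(len(v) for v in self.groups.values()), batch_cap)
        self.tf = T.Compose([T.Resize((image_size, image_size)), T.ToTensor()])

    def sample_batch(self):
        # Draw one class group per batch so consensus is learned over co-salient objects.
        cls = random.choice(list(self.groups))
        paths = random.sample(self.groups[cls], self.batch_size)
        return torch.stack([self.tf(Image.open(p).convert("RGB")) for p in paths])
```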
Dataset Splits | No | The paper mentions training and testing sets, but does not explicitly provide details about a validation dataset split (e.g., percentages or sample counts for validation).
Hardware Specification | Yes | All the experiments are implemented based on the PyTorch library (Paszke et al. 2019) with a single NVIDIA RTX 3090 GPU.
Software Dependencies | No | The paper mentions the PyTorch library but does not specify a version number or other software dependencies with versions.
Experiment Setup | Yes | batchsize = min(#group_1, ..., #group_N, 48); images are resized to 256x256 for training and inference. MCCL is trained for 250 epochs with the AdamW optimizer (Loshchilov and Hutter 2019). The initial learning rate is 1e-4 and is divided by 10 for the last 20 epochs.
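A minimal sketch of the optimizer and schedule described above; the placeholder network, dummy batches, and the reading of "divided by 10 for the last 20 epochs" as a single step at epoch 230 of 250 are assumptions, not the authors' implementation:

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import MultiStepLR

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder network standing in for MCCL; the real model is in the authors' repo.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1).to(device)

optimizer = AdamW(model.parameters(), lr=1e-4)  # initial LR per the paper
# Single LR drop at epoch 230 of 250, i.e., /10 for the last 20 epochs (assumed reading).
scheduler = MultiStepLR(optimizer, milestones=[230], gamma=0.1)

for epoch in range(250):
    # Each batch: one class group of 256x256 images (dummy tensors here).
    images = torch.randn(16, 3, 256, 256, device=device)
    preds = model(images)
    loss = preds.mean()  # stand-in for the paper's actual training losses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```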