Multi-scale Graph Fusion for Co-saliency Detection

Authors: Rongyao Hu, Zhenyun Deng, Xiaofeng Zhu (pp. 7789-7796)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluated our method on three benchmark data sets, compared to state-of-the-art co-saliency detection methods. Experimental results showed that our method outperformed all comparison methods in terms of different evaluation metrics.
Researcher Affiliation | Academia | (1) Center for Future Media and School of Computer Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China; (2) School of Natural and Computational Science, Massey University Auckland Campus, New Zealand; (3) School of Computer Science, The University of Auckland, New Zealand
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We evaluated our method on three benchmark data sets, i.e., iCoseg, Cosal2015, and MSRC... The data set iCoseg (Batra et al. 2010)... The data set Cosal2015 (Zhang et al. 2016)... The data set MSRC (Winn, Criminisi, and Minka 2005)... we selected the data set MSRA-B in (Liu et al. 2010) to train deep models.
Dataset Splits | No | The paper mentions using specific datasets for training and evaluation but does not provide explicit details about training, validation, or test splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification | Yes | All experiments were conducted on a server with 4 NVIDIA Quadro P4000 8G.
Software Dependencies | No | The paper mentions using the Adam optimizer (Kingma and Ba 2014) but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages.
Experiment Setup | Yes | In our experiments, we reshaped the size of all images to 224 × 224 and set the number of superpixel regions as 5000... we set the maximal number of epochs as 10000 using the Adam optimizer (Kingma and Ba 2014), and set the initial learning rate and the weight decay, respectively, as 1e-5 and 0.005. We set stopping criterion as no decreasing of the objective function for 100 consecutive epochs in the training process.
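
The reported experiment setup can be summarized in a short sketch. The paper does not name a deep learning framework or a superpixel algorithm, so PyTorch and SLIC are assumed here, and `model`, `loader`, and `co_saliency_loss` are hypothetical placeholders standing in for the authors' pipeline; this is a minimal illustration of the stated hyperparameters, not the authors' implementation.

```python
# Minimal sketch of the reported training setup (assumptions: PyTorch, SLIC).
import torch
from torchvision import transforms
from skimage.segmentation import slic

# All images are reshaped to 224 x 224, as stated in the paper.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def superpixels(image_np):
    # The paper sets the number of superpixel regions to 5000; SLIC is only an
    # assumption, since the exact superpixel method is not specified.
    return slic(image_np, n_segments=5000)

def train(model, loader, co_saliency_loss, device="cuda"):
    # Adam with the reported initial learning rate (1e-5) and weight decay (0.005).
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, weight_decay=0.005)
    best_obj, stall = float("inf"), 0
    for epoch in range(10000):              # maximal number of epochs: 10000
        epoch_obj = 0.0
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = co_saliency_loss(model(images), targets)
            loss.backward()
            optimizer.step()
            epoch_obj += loss.item()
        # Stopping criterion: no decrease of the objective function for 100
        # consecutive epochs.
        if epoch_obj < best_obj:
            best_obj, stall = epoch_obj, 0
        else:
            stall += 1
            if stall >= 100:
                break
    return model
```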