Co-Saliency Detection Within a Single Image
Authors: Hongkai Yu, Kang Zheng, Jianwu Fang, Hao Guo, Wei Feng, Song Wang
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiment, we collect a new dataset of 364 color images with within-image co-saliency. Experiment results show that the proposed method can better detect the within-image co-saliency than existing algorithms. |
| Researcher Affiliation | Academia | 1 School of Computer Science and Technology, Tianjin University, Tianjin, China 2 Department of Computer Science and Engineering, University of South Carolina, Columbia, SC 3 Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China 4 School of Electronic and Control Engineering, Chang'an University, Xi'an, China |
| Pseudocode | Yes | Algorithm 1 Co-saliency detection within a single image. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | Therefore, we collect a new image dataset, consisting of 364 color images. Each image shows a certain level of within-image co-saliency, e.g., the presence of multiple instances of the same object class with very similar appearance. |
| Dataset Splits | No | The paper mentions evaluating on the full dataset (364 images) and on subsets like 'easy' (299 images) and 'challenging' (65 images), but it does not specify explicit train/validation/test dataset splits needed for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the "CVX convex optimization toolbox" but does not specify its version number, nor does it list other software dependencies with specific version numbers. |
| Experiment Setup | Yes | In our experiment, we generate M = 100 object proposals. The number of proposal groups is set to N = 10. The number of proposals in each group is set to K = 2. We set the balance factors λ = 0.01 in Eq. (4) and β = 0.05 in Eq. (6). The number of clusters is set to Z = 6 in the K-means algorithm. The initial within-image saliency map h(X) is computed using the algorithm developed in (Li and Yu 2016). Starting from the initial saliency map h(X), we first threshold this saliency map by a threshold (0.2 in our experiments). |
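The experiment-setup parameters reported above can be collected into a minimal sketch. This is an illustrative reconstruction, not the authors' code: the parameter names, the `threshold_saliency` helper, and the plain Lloyd's K-means stand-in are assumptions based only on the values quoted in the table (the paper's actual clustering and proposal-grouping steps are not specified here).

```python
import numpy as np

# Hyperparameters as reported in the paper's experiment setup (values only;
# the surrounding pipeline below is a hypothetical sketch).
M = 100                   # number of object proposals
N = 10                    # number of proposal groups
K = 2                     # proposals per group
LAMBDA = 0.01             # balance factor lambda in Eq. (4)
BETA = 0.05               # balance factor beta in Eq. (6)
Z = 6                     # number of K-means clusters
SALIENCY_THRESHOLD = 0.2  # threshold applied to the initial saliency map h(X)


def threshold_saliency(h, t=SALIENCY_THRESHOLD):
    """Binarize an initial within-image saliency map h(X) at threshold t."""
    return (np.asarray(h) >= t).astype(np.uint8)


def kmeans(features, z=Z, n_iter=20, seed=0):
    """Plain Lloyd's K-means, an illustrative stand-in for the Z=6 clustering step."""
    rng = np.random.default_rng(seed)
    features = np.asarray(features, dtype=float)
    # Initialize centers from z distinct random samples.
    centers = features[rng.choice(len(features), z, replace=False)]
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # Assign each feature vector to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for c in range(z):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return labels, centers
```

Under these assumptions, reproducing the setup would amount to binarizing h(X) at 0.2 to localize candidate salient regions, then clustering region features into Z = 6 groups; the proposal-group parameters (M, N, K) and balance factors (λ, β) would enter the optimization stage the paper formulates in Eqs. (4) and (6).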