CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection
Authors: Qijian Zhang, Runmin Cong, Junhui Hou, Chongyi Li, Yao Zhao
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed CoADNet is evaluated on four prevailing CoSOD benchmark datasets, demonstrating remarkable performance improvement over ten state-of-the-art competitors. |
| Researcher Affiliation | Academia | 1Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China 2Institute of Information Science, Beijing Jiaotong University, China 3Beijing Key Laboratory of Advanced Information Science and Network Technology, China 4School of Computer Science and Engineering, Nanyang Technological University, Singapore |
| Pseudocode | No | The paper contains architectural diagrams (e.g., flowcharts) but no clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://rmcong.github.io/proj_CoADNet.html and We also provide a Pytorch implementation version |
| Open Datasets | Yes | In experiments, we conduct extensive evaluations on four popular datasets, including CoSOD3k [15], Cosal2015 [53], MSRC [47], and iCoseg [2]. and In each training iteration, 24 sub-groups from the co-saliency dataset COCO-SEG [43] and 64 samples from the saliency dataset DUTS [44] are simultaneously fed into the network |
| Dataset Splits | No | The paper mentions datasets used for evaluation (test sets) and training, but does not explicitly describe a separate validation set or split for model tuning. |
| Hardware Specification | Yes | The proposed framework is implemented in MindSpore and accelerated by 4 Tesla P100 GPUs. |
| Software Dependencies | No | The proposed framework is implemented in MindSpore and accelerated by 4 Tesla P100 GPUs. We also provide a Pytorch implementation version. (No specific version numbers are provided for MindSpore or PyTorch.) |
| Experiment Setup | Yes | In each training iteration, 24 sub-groups from the co-saliency dataset COCO-SEG [43] and 64 samples from the saliency dataset DUTS [44] are simultaneously fed into the network for jointly optimizing the objective function in Eq. 9, where α = 0.7 and β = 0.3, by the Adam [29] algorithm with a weight decay of 5e-4. and we set the initial learning rate to 1e-4 that is halved every 5,000 iterations, and the whole training process converges until 50,000 iterations. (A hedged sketch of this setup follows the table.) |
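
The experiment-setup quote above fully specifies the optimization recipe: joint batching of 24 COCO-SEG sub-groups and 64 DUTS samples per iteration, the α/β weighting of Eq. 9, Adam with a 5e-4 weight decay, an initial learning rate of 1e-4 halved every 5,000 iterations, and a 50,000-iteration budget. The minimal PyTorch sketch below reproduces only those hyper-parameter values; the `nn.Conv2d` model and the random tensors are placeholders standing in for CoADNet and the real data loaders, and the exact form of the two loss terms in Eq. 9 is assumed to be binary cross-entropy.

```python
# Minimal sketch of the reported optimization setup, assuming a PyTorch re-implementation.
# The tiny Conv2d stands in for CoADNet and the random tensors stand in for the
# COCO-SEG / DUTS batches; only the loss weighting, optimizer, LR schedule, and
# iteration budget follow the values quoted in the table.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # placeholder, not the real CoADNet
bce = nn.BCEWithLogitsLoss()                        # assumed form of the loss terms in Eq. 9

alpha, beta = 0.7, 0.3                              # loss weights quoted for Eq. 9
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5_000, gamma=0.5)  # halve LR every 5,000 iterations

for it in range(50_000):                            # training converges at 50,000 iterations
    # per iteration: 24 co-saliency sub-groups (COCO-SEG) + 64 saliency samples (DUTS)
    cosal_img, cosal_gt = torch.randn(24, 3, 64, 64), torch.rand(24, 1, 64, 64)
    sal_img, sal_gt = torch.randn(64, 3, 64, 64), torch.rand(64, 1, 64, 64)

    loss = alpha * bce(model(cosal_img), cosal_gt) + beta * bce(model(sal_img), sal_gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                                # LR is stepped once per iteration, not per epoch
```

The schedule is stepped per iteration rather than per epoch to match the iteration-based description in the paper; whether the released code uses `StepLR` or a manual halving is not stated.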