SuperVAE: Superpixelwise Variational Autoencoder for Salient Object Detection
Authors: Bo Li, Zhengxing Sun, Yuqi Guo (pp. 8569-8576)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five widely-used benchmark datasets show that the proposed method achieves superior or competitive performance compared to other algorithms including the very recent state-of-the-art supervised methods. |
| Researcher Affiliation | Academia | Bo Li, Zhengxing Sun, Yuqi Guo State Key Laboratory for Novel Software Technology, Nanjing University, China |
| Pseudocode | No | The paper includes diagrams (e.g., Figure 2) but no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states the network was "implemented on the basis of pytorch, an open source framework for CNN training and testing" but does not provide a link to, or state the public availability of, the authors' own implementation. |
| Open Datasets | Yes | We evaluate the performance of our method on five public datasets: ECSSD (Shi et al. 2016) dataset contains 1,000 natural images... ASD (Achanta et al. 2009) consists of 1000 images... SED (Borji et al. 2015) dataset has two non-overlapped subsets... SOD (Wang et al. 2017) dataset contains 300 images... |
| Dataset Splits | No | The paper lists datasets used for evaluation but does not specify explicit training, validation, and test splits (e.g., percentages or sample counts) needed for reproduction. |
| Hardware Specification | Yes | We run our method on an octa-core PC machine with an NVIDIA GTX 1080Ti GPU and an i7-6900 CPU. |
| Software Dependencies | No | Our proposed Super VAE network has been implemented on the basis of pytorch, an open source framework for CNN training and testing. The specific version number for pytorch is not provided. |
| Experiment Setup | Yes | During the training, we use ADAM stochastic gradient optimization method with batch size 10, and learning rate 0.005. For a single input image, the training process usually converges in 200 iterations. |
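The reported experiment setup (Adam, batch size 10, learning rate 0.005, convergence in about 200 iterations per image) can be sketched as a minimal PyTorch training loop. Since the authors' code is not public, the model, input batch, and loss below are hypothetical placeholders; only the optimizer settings come from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in model: the SuperVAE architecture is not released, so a tiny
# placeholder network is used purely to illustrate the optimizer settings.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 8))

# Settings reported in the paper: Adam optimizer, batch size 10,
# learning rate 0.005, roughly 200 iterations to converge per input image.
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
loss_fn = nn.MSELoss()
BATCH_SIZE, MAX_ITERS = 10, 200

x = torch.randn(BATCH_SIZE, 8)      # placeholder input batch
losses = []
for _ in range(MAX_ITERS):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)     # reconstruction-style objective (placeholder)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Even on this toy objective, the loop shows how the three reported hyperparameters plug into a standard PyTorch training script.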