Towards Intelligent Visual Understanding under Minimal Supervision
Authors: Dingwen Zhang
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results have demonstrated the effectiveness of the proposed algorithms. Comprehensive evaluations on three benchmark datasets and comparisons with nine state-of-the-art algorithms demonstrate the superiority of this work. |
| Researcher Affiliation | Academia | Dingwen Zhang, School of Automation, Northwestern Polytechnical University (zhangdingwen2006yyy@gmail.com) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that the code is publicly available. |
| Open Datasets | Yes | Comprehensive evaluations on three benchmark datasets and comparisons with nine state-of-the-art algorithms demonstrate the superiority of this work. [Han et al., 2015a] J. Han, D. Zhang, X. Hu, L. Guo, and F. Wu. Background Prior-Based Salient Object Detection via Deep Reconstruction Residual. TCSVT, 25(8): 1309-1321, 2015. [Zhang et al., 2015a] D. Zhang, J. Han, C. Li, and J. Wang. Co-saliency detection via looking deep and wide. In CVPR, pages 2994-3002, 2015. |
| Dataset Splits | No | The paper mentions evaluating on benchmark datasets but does not specify any training, validation, or test splits, nor does it refer to standard splits with citations. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of "deep learning architectures" but does not specify any software names with version numbers (e.g., libraries, frameworks, or specific solvers). |
| Experiment Setup | No | The paper describes the general approaches (e.g., "stacked denoising autoencoders with deep learning architectures"), but it does not provide specific experimental setup details such as hyperparameters (learning rate, batch size, number of epochs), optimizer settings, or model initialization details. An illustrative sketch of such a setup follows this table. |
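
Because the paper only names "stacked denoising autoencoders with deep learning architectures" and reports none of the training details noted above, the following is a minimal, generic sketch of a stacked denoising autoencoder with greedy layer-wise pre-training, written in PyTorch. Every choice here (layer sizes, Gaussian noise level, Adam optimizer, learning rate, epoch count) is an assumption made for illustration and is not the authors' configuration.

```python
# Illustrative sketch only: the paper does not specify an architecture or
# hyperparameters, so all values below are assumptions, not the authors' setup.
import torch
import torch.nn as nn


class DenoisingAutoencoder(nn.Module):
    """One layer of a stacked denoising autoencoder (SDAE)."""

    def __init__(self, in_dim: int, hidden_dim: int, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Corrupt the input with Gaussian noise, then reconstruct the clean input.
        noisy = x + self.noise_std * torch.randn_like(x)
        return self.decoder(self.encoder(noisy))


def pretrain_layerwise(features: torch.Tensor,
                       hidden_dims=(512, 256), epochs=20, lr=1e-3):
    """Greedy layer-wise pre-training; returns the stacked encoder."""
    encoders, current = [], features
    for hidden_dim in hidden_dims:
        dae = DenoisingAutoencoder(current.shape[1], hidden_dim)
        optimizer = torch.optim.Adam(dae.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            optimizer.zero_grad()
            loss = loss_fn(dae(current), current)  # reconstruct the clean input
            loss.backward()
            optimizer.step()
        encoders.append(dae.encoder)
        with torch.no_grad():
            current = dae.encoder(current)  # hidden codes feed the next layer
    return nn.Sequential(*encoders)


if __name__ == "__main__":
    fake_features = torch.rand(128, 1024)  # placeholder for image region features
    sdae = pretrain_layerwise(fake_features)
    print(sdae(fake_features).shape)  # torch.Size([128, 256])
```

The greedy layer-wise recipe shown here is the standard way to pre-train an SDAE; whether the authors used it, and with which feature dimensions and noise model, cannot be determined from the paper.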