Cross-Granularity Graph Inference for Semantic Video Object Segmentation

Authors: Huiling Wang, Tinghuai Wang, Ke Chen, Joni-Kristian Kämäräinen

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on two popular semantic video object segmentation benchmarks and demonstrate that it advances the state-of-the-art by achieving superior accuracy to other leading methods.
Researcher Affiliation | Collaboration | Huiling Wang1, Tinghuai Wang2, Ke Chen1, Joni-Kristian Kämäräinen1; 1Department of Signal Processing, Tampere University of Technology, Finland; 2Nokia Technologies, Finland; {huiling.wang, ke.chen, joni.kamarainen}@tut.fi, tinghuai.wang@nokia.com
Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. While it describes the steps of the proposed method and provides equations, it lacks the structured formatting of pseudocode.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We evaluate on two large-scale video object segmentation datasets, YouTube-Objects [Prest et al., 2012] and egoMotion [Shankar Nagaraja et al., 2015], which together contain over 30,000 frames. The categories of these two datasets are subsets of the 20 PASCAL VOC 2012 classes on which R-CNN is pretrained.
Dataset Splits | No | The paper uses standard datasets but does not explicitly specify training/validation/test splits for its own setup (e.g., percentages, sample counts, or the names of standard splits).
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU models, CPU types, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions various models and algorithms used (e.g., Fast R-CNN, SR-DCF tracker, VGG-16 Net, alpha expansion) but does not provide specific version numbers for any software dependencies.
Experiment Setup | No | The paper does not provide specific details on the experimental setup, such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific training configurations.
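
Since the paper releases no code (see the Open Source Code row above), the minimal sketch below illustrates the per-class intersection-over-union metric commonly used to report segmentation accuracy on benchmarks such as YouTube-Objects. It is a generic illustration under that assumption, not material from the paper; the function name and toy label maps are hypothetical.

import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class intersection-over-union for two integer label maps.

    pred, gt: arrays of identical shape holding class labels (0 = background).
    Returns one IoU value per class; classes absent from both prediction
    and ground truth are reported as NaN.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

# Toy example: two 4x4 label maps with classes {0, 1}
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 1, 0, 0]] * 4)
print(per_class_iou(pred, gt, num_classes=2))  # -> [0.6667, 0.5]

In practice such per-class scores would be averaged over all annotated frames and classes (mean IoU) to produce a single benchmark number.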