Symbolic Graph Reasoning Meets Convolutions

Authors: Xiaodan Liang, Zhiting Hu, Hao Zhang, Liang Lin, Eric P. Xing

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show incorporating SGR significantly improves plain ConvNets on three semantic segmentation tasks and one image classification task.
Researcher Affiliation | Collaboration | School of Intelligent Systems Engineering, Sun Yat-sen University; Carnegie Mellon University; School of Data and Computer Science, Sun Yat-sen University; Petuum Inc.
Pseudocode | No | The paper includes a diagram in Figure 2 showing 'Implementation details of one SGR layer' with boxes and operations, but it does not contain a formal pseudocode block or a section explicitly labeled 'Algorithm'. (A hedged sketch of one SGR layer is given after this table.)
Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology or a link to a code repository.
Open Datasets | Yes | We evaluate on three public benchmarks... Specifically, Coco-Stuff [4]... ADE20k [52]... PASCAL-Context [34]... We further conduct studies for image classification task on CIFAR-100 [21].
Dataset Splits | Yes | Coco-Stuff [4]... including 9,000 for training and 1,000 for testing. ADE20k [52] consists of 20,210 images for training and 2,000 for validation... PASCAL-Context [34] includes 4,998 images for training and 5,105 for testing... CIFAR-100 [21] consisting of 50K training images and 10K test images.
Hardware Specification | Yes | We conduct all experiments using Pytorch, 2 GTX TITAN X 12GB cards on a single server.
Software Dependencies | No | The paper states 'We conduct all experiments using Pytorch,' but it does not specify the version number of Pytorch or any other software dependencies.
Experiment Setup | Yes | Dl and Dc for feature dimensions... are thus set as 256... We adopt the standard SGD optimization... set the base learning rate to 2.5e-3 for newly initialized layers and 2.5e-4 for pretrained layers. We train 64 epochs for Coco-Stuff and PASCAL-Context, and 120 epochs for ADE20K dataset... the batch size is used as 6. The input crop size is set as 513×513. For CIFAR-100: We set Dl and Dc as 128. During training, we use a mini-batch size of 64 on two GPUs using a cosine learning rate scheduling [16] for 600 epochs.
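
Since the paper provides only the Figure 2 diagram and no pseudocode or released code, the following is a minimal PyTorch sketch of one SGR layer under our own reading of the three described modules (local-to-semantic voting, graph reasoning over symbolic nodes, semantic-to-local mapping). The class name `SGRLayer`, the fixed normalized adjacency `adj`, the fixed word-embedding matrix `node_emb`, and the choice of softmax normalization directions are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of one SGR layer (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGRLayer(nn.Module):
    def __init__(self, in_channels, num_nodes, d_local=256, d_node=256,
                 adj=None, node_emb=None, emb_dim=300):
        super().__init__()
        self.num_nodes = num_nodes
        # Local-to-semantic voting: per-pixel vote weights over symbolic nodes.
        self.vote = nn.Conv2d(in_channels, num_nodes, kernel_size=1)
        self.local_transform = nn.Conv2d(in_channels, d_node, kernel_size=1)
        # Graph reasoning over symbolic nodes; adjacency and word embeddings are
        # assumed to be given externally (knowledge graph + linguistic embeddings).
        self.register_buffer("adj", adj if adj is not None else torch.eye(num_nodes))
        self.register_buffer("node_emb",
                             node_emb if node_emb is not None else torch.zeros(num_nodes, emb_dim))
        self.graph_weight = nn.Linear(d_node + emb_dim, d_node)
        # Semantic-to-local mapping: distribute evolved node features back to pixels.
        self.map_local = nn.Conv2d(in_channels, d_local, kernel_size=1)
        self.map_node = nn.Linear(d_node, d_local)
        self.out = nn.Conv2d(in_channels + d_local, in_channels, kernel_size=1)

    def forward(self, x):                                                     # x: (B, C, H, W)
        b, c, h, w = x.shape
        # 1) Voting: soft-assign pixels to nodes (normalization over pixels is an assumption).
        votes = F.softmax(self.vote(x).view(b, self.num_nodes, -1), dim=-1)   # (B, N, HW)
        feats = self.local_transform(x).view(b, -1, h * w)                    # (B, Dn, HW)
        node_feats = torch.bmm(votes, feats.transpose(1, 2))                  # (B, N, Dn)
        # 2) Graph reasoning: concatenate word embeddings, propagate over the graph.
        emb = self.node_emb.unsqueeze(0).expand(b, -1, -1)
        node_feats = torch.cat([node_feats, emb], dim=-1)
        node_feats = F.relu(torch.matmul(self.adj, self.graph_weight(node_feats)))
        # 3) Mapping: attention from pixels to evolved nodes, then residual-style fusion.
        queries = self.map_local(x).view(b, -1, h * w)                        # (B, Dl, HW)
        keys = self.map_node(node_feats)                                      # (B, N, Dl)
        attn = F.softmax(torch.bmm(keys, queries), dim=1)                     # (B, N, HW)
        mapped = torch.bmm(keys.transpose(1, 2), attn).view(b, -1, h, w)      # (B, Dl, H, W)
        return self.out(torch.cat([x, mapped], dim=1))
```

For example, `SGRLayer(512, num_nodes=182)(torch.randn(2, 512, 65, 65))` returns a tensor of the same spatial size, so the layer can be dropped between convolutional stages as the paper describes.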
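
The experiment-setup row quotes two learning rates for different parameter groups and a cosine schedule for CIFAR-100; the snippet below is a minimal sketch of how that quoted configuration could be expressed in PyTorch. The split into `pretrained_params` and `new_params` is taken from the quote, while the momentum and weight-decay values are placeholders not stated in the row above.

```python
# Hedged sketch of the quoted optimization settings.
import torch

def build_optimizer(pretrained_params, new_params):
    # 2.5e-4 for pretrained layers, 2.5e-3 for newly initialized layers (quoted);
    # momentum and weight decay are assumed values, not quoted above.
    return torch.optim.SGD(
        [{"params": pretrained_params, "lr": 2.5e-4},   # pretrained backbone layers
         {"params": new_params,        "lr": 2.5e-3}],  # newly initialized layers
        momentum=0.9, weight_decay=5e-4)

# For the CIFAR-100 runs, the quoted cosine learning-rate schedule over 600 epochs
# could be stepped once per epoch:
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=600)
```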