Deep Convolutional Sum-Product Networks

Authors: Cory J. Butz, Jhonatan S. Oliveira, André E. dos Santos, André L. Teixeira (pp. 3248–3255)

AAAI 2019

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type (Experimental): "Our preliminary results on image sampling are encouraging, since the DCSPN-sampled images exhibit variability. Experiments on image completion show that DCSPNs significantly outperform competing methods by achieving several state-of-the-art mean squared error (MSE) scores in both left-completion and bottom-completion in benchmark datasets."
Researcher Affiliation (Academia): Cory J. Butz (butz@cs.uregina.ca), Jhonatan S. Oliveira (oliveira@cs.uregina.ca), André E. dos Santos (dossantos@cs.uregina.ca), and André L. Teixeira (teixeira@cs.uregina.ca), all University of Regina, Canada.
Pseudocode (Yes): Algorithm 1, "Mask MPE Backward Propagation".
Open Source Code (No): The paper does not state that its own source code is publicly available, nor does it provide a link.
Open Datasets (Yes): Table 1 gives mean squared error (MSE) scores for left-completion and bottom-completion on the Olivetti Faces dataset (Samaria and Harter 1994); Table 2 gives the corresponding scores on the Caltech datasets (Fei-Fei, Fergus, and Perona 2007).
Dataset Splits (No): "For each dataset, we randomly set aside one third (up to 50 images) for testing." This specifies a test split but does not mention a separate validation split or explicit training/validation/test splits.
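The quoted split can be sketched as follows. This is a minimal illustration, not code from the paper; the function name, the fixed seed, and the train/test return convention are all our own assumptions.

```python
import random

def make_test_split(image_ids, seed=0):
    """Randomly set aside one third of the images for testing, capped at
    50 images, as described in the paper; everything else is kept for
    training. The seed and the capping order are illustrative choices."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    n_test = min(len(ids) // 3, 50)  # one third, up to 50 images
    return ids[n_test:], ids[:n_test]  # (train, test)

# Example: 400 Olivetti face images; one third would be 133, so the
# cap of 50 applies and 350 images remain for training.
train, test = make_test_split(range(400))
```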
Hardware Specification (No): The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies (No): The paper mentions software such as TensorFlow and the ADAM optimizer but does not provide version numbers for these or other key software components.
Experiment Setup (Yes): "For training, we use ADAM (Kingma and Ba 2014) with a learning rate of 0.005. ... Here, we use the hyperparameter values suggested in (Amos 2016) and 100 epochs during training. ... A convolutional layer follows every representational layer and every sum-pooling layer. All convolutional layers have filter sizes (height-by-width) matching the layer size. Two sum-pooling layers follow each convolutional layer: one with a window size of 1-by-2 and the other 2-by-1. The window sizes of 1-by-2 and 2-by-1 alternate with 2-by-2 and 2-by-2 every n layers. This hyperparameter n is tuned per dataset and varied between 70 and 100 in our experiments."
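One reading of the alternation described in that quote can be made concrete with a small helper. This is a sketch under our interpretation (the pooling pair switches once every n convolutional layers); the function name and representation are hypothetical, not from the paper.

```python
def pooling_schedule(num_conv_layers, n):
    """Return, for each convolutional layer, the pair of sum-pooling
    window sizes that follows it. Per the paper's description, each
    convolutional layer is followed by two sum-pooling layers, starting
    with the (1x2, 2x1) pair and alternating with (2x2, 2x2) every n
    layers. The hyperparameter n is tuned per dataset (70-100)."""
    schedule = []
    use_rect = True  # start with the 1-by-2 / 2-by-1 pair
    for layer in range(num_conv_layers):
        if layer > 0 and layer % n == 0:
            use_rect = not use_rect  # switch pooling pattern every n layers
        schedule.append(((1, 2), (2, 1)) if use_rect else ((2, 2), (2, 2)))
    return schedule

# With n = 70 (the lower end of the tuned range), layers 0-69 use the
# (1x2, 2x1) pair and layers 70-139 use the (2x2, 2x2) pair.
sched = pooling_schedule(140, 70)
```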