Sum-Product Autoencoding: Encoding and Decoding Representations Using Sum-Product Networks
Authors: Antonio Vergari, Robert Peharz, Nicola Di Mauro, Alejandro Molina, Kristian Kersting, Floriana Esposito
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results on several multilabel classification problems demonstrate that SPAE is competitive with state-of-the-art autoencoder architectures, even if the SPNs were never trained to reconstruct their inputs. |
| Researcher Affiliation | Collaboration | Antonio Vergari (antonio.vergari@uniba.it), University of Bari, Italy; Robert Peharz (rp587@cam.ac.uk), University of Cambridge, UK; Nicola Di Mauro (nicola.dimauro@uniba.it), University of Bari, Italy; Alejandro Molina (alejandro.molina@tu-dortmund.de), TU Dortmund, Germany; Kristian Kersting (kersting@cs.tu-darmstadt.de), TU Darmstadt, Germany; Floriana Esposito (floriana.esposito@uniba.it), University of Bari, Italy ... RP acknowledges the support by Arm Ltd. |
| Pseudocode | No | No pseudocode or algorithm blocks were found. |
| Open Source Code | Yes | Vergari, A.; Peharz, R.; Di Mauro, N.; Molina, A.; Kersting, K.; and Esposito, F. 2017. Code and supplemental material for the present paper. github.com/arranger1044/spae. |
| Open Datasets | Yes | For all experiments we use 10 standard MLC benchmarks: Arts, Business, Cal, Emotions, Flags, Health, Human, Plant, Scene and Yeast, preprocessed as binary data in 5 folds as in (Di Mauro, Vergari, and Esposito 2016). |
| Dataset Splits | Yes | For all experiments we use 10 standard MLC benchmarks: Arts, Business, Cal, Emotions, Flags, Health, Human, Plant, Scene and Yeast, preprocessed as binary data in 5 folds as in (Di Mauro, Vergari, and Esposito 2016). |
| Hardware Specification | No | The main paper does not mention the hardware used for the experiments (e.g., GPU/CPU models, memory). Although the supplemental material is referenced for 'learning settings', that reference does not guarantee that hardware specifications are included. |
| Software Dependencies | No | The main paper does not list software dependencies with version numbers. Although the supplemental material is referenced for 'learning settings', that reference does not guarantee that versioned software dependencies are included. |
| Experiment Setup | Yes | We employ for p only linear predictors to highlight the ability of the learned embeddings to disentangle the represented spaces. For scenario I) p is a L2-regularized logistic regressor (LR) while for II) and III) it may be either a LR or a ridge regressor (RR) if it is predicting CAT or ACT embeddings. As a proxy measure to the meaningfulness of the learned representations, we consider their JAC and EXA prediction scores and compare them against the natural baseline of LR being applied to raw input and output, X LR = Y. We also employ a standard fully-supervised predictor for structured output, a Structured SVM employing CRFs to perform tractable inference by Chow-Liu trees on the label space (CRFSSVM) (Finley and Joachims 2008). ... Note that for all the aforementioned models we had to perform an extensive grid search first to determine their structure and then to tune their hyperparameters (see supplemental material). |
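The experiment-setup protocol quoted above pairs simple linear predictors (logistic or ridge regression) with a grid search over hyperparameters, scored by exact-match (EXA) on the label vectors. A minimal sketch of that protocol follows; the synthetic binary data, the regularization grid, and the decision threshold are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one binarized MLC fold: binary inputs X, binary labels Y.
n, d, n_labels = 200, 20, 5
X = rng.integers(0, 2, size=(n, d)).astype(float)
W_true = rng.normal(size=(d, n_labels))
Y = (X @ W_true + rng.normal(scale=0.5, size=(n, n_labels)) > 0).astype(float)

X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

def ridge_fit(X, Y, alpha):
    """Closed-form L2-regularized least squares: W = (X'X + alpha*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def exact_match(Y_true, Y_pred):
    """EXA score: fraction of examples whose full label vector is predicted exactly."""
    return float(np.mean(np.all(Y_true == Y_pred, axis=1)))

# Grid search over the regularization strength, analogous to the paper's
# hyperparameter tuning; the grid values here are illustrative.
best = max(
    ((alpha, exact_match(Y_te, (X_te @ ridge_fit(X_tr, Y_tr, alpha)) > 0.5))
     for alpha in (0.01, 0.1, 1.0, 10.0)),
    key=lambda t: t[1],
)
print(best)  # (best alpha, its held-out EXA score)
```

The same loop extends to the logistic-regression baseline and the JAC (Jaccard) score by swapping the fitting routine and the metric; selection on a proper validation split rather than the test fold would be required to reproduce the paper's tuning faithfully.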