Decision Sum-Product-Max Networks
Authors: Mazen Melibari, Pascal Poupart, Prashant Doshi
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we propose a new extension to SPNs, called Decision Sum-Product-Max Networks (Decision-SPMNs), that makes SPNs suitable for discrete multi-stage decision problems. We present an algorithm that solves Decision-SPMNs in a time that is linear in the size of the network. We also present algorithms to learn the parameters of the network from data. |
| Researcher Affiliation | Academia | Mazen Melibari, Pascal Poupart: David R. Cheriton School of Computer Science, University of Waterloo, Canada; Prashant Doshi: Dept. of Computer Science, University of Georgia, Athens, GA 30602, USA |
| Pseudocode | Yes | Algorithm 1: Decision-SPMN Parameters Learning |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes a theoretical "dataset D" for parameter learning (e.g., "Let D be a dataset that consists of |D| instances..."), but it does not specify any publicly available datasets, nor does it provide links, DOIs, or formal citations for data access. |
| Dataset Splits | No | The paper does not provide specific details about dataset splits (e.g., percentages, sample counts, or citations to predefined splits) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run experiments, such as CPU or GPU models, or memory specifications. The paper describes algorithms and theoretical properties, but does not report empirical results from actual hardware. |
| Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers (e.g., programming languages, libraries, or solvers). |
| Experiment Setup | No | The paper describes theoretical algorithms and does not report empirical experiments. Therefore, it does not include details on experimental setup such as hyperparameters, optimization settings, or training configurations. |
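The paper's central claim recorded above is that a Decision-SPMN can be solved with a single bottom-up pass in time linear in the size of the network: sum nodes compute weighted averages of their children, product nodes multiply, and max nodes select the decision branch with the highest value. The following is a minimal illustrative sketch of such a pass; the `Node` class, node names, and the tiny example network are assumptions for illustration, not structures taken from the paper.

```python
# Hypothetical sketch of the bottom-up, linear-time evaluation described
# for Decision-SPMNs. All class and variable names here are illustrative.

class Node:
    def __init__(self, kind, children=None, weights=None, value=None):
        self.kind = kind          # 'leaf', 'sum', 'product', or 'max'
        self.children = children or []
        self.weights = weights    # edge weights, used by sum nodes only
        self.value = value        # leaf value (likelihood or utility)

def evaluate(node):
    """One bottom-up pass; each node is visited once, so the cost is
    linear in the number of edges of the network."""
    if node.kind == 'leaf':
        return node.value
    vals = [evaluate(c) for c in node.children]
    if node.kind == 'sum':        # chance: weighted average of children
        return sum(w * v for w, v in zip(node.weights, vals))
    if node.kind == 'product':    # independent sub-scopes: multiply
        prod = 1.0
        for v in vals:
            prod *= v
        return prod
    if node.kind == 'max':        # decision: pick the best action branch
        return max(vals)
    raise ValueError(f"unknown node kind: {node.kind}")

# Tiny example: a decision (max) node over two chance (sum) branches.
net = Node('max', [
    Node('sum', [Node('leaf', value=4.0), Node('leaf', value=1.0)],
         weights=[0.3, 0.7]),
    Node('sum', [Node('leaf', value=2.0), Node('leaf', value=3.0)],
         weights=[0.5, 0.5]),
])
print(evaluate(net))  # branch values 1.9 vs 2.5; max node returns 2.5
```

In this toy network the first branch evaluates to 0.3·4 + 0.7·1 = 1.9 and the second to 0.5·2 + 0.5·3 = 2.5, so the max node returns 2.5, the best expected value among the available decisions.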