Discriminative Structure Learning of Arithmetic Circuits

Authors: Amirmohammad Rooshenas, Daniel Lowd

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Based on our experiments, DACLearn learns models that are more accurate and compact than other tractable generative and discriminative baselines. We run our experiments using 20 datasets with 16 to 1556 binary-valued variables."
Researcher Affiliation | Academia | "Amirmohammad Rooshenas and Daniel Lowd, Department of Computer and Information Science, University of Oregon, Eugene, OR 97401, USA, {rooshena,lowd}@uoregon.edu"
Pseudocode | Yes | "Algorithm 1 shows the high-level pseudo-code of the DACLearn algorithm." (A generic greedy-search sketch follows the table.)
Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that the code for the methodology is available.
Open Datasets | Yes | "We run our experiments using 20 datasets with 16 to 1556 binary-valued variables, which were also used by Gens and Domingos (2013) and Rooshenas and Lowd (2014)."
Dataset Splits | Yes | "For all of the above methods, we learn the model using the training data and tune the hyper-parameters using the validation data, and we report the average CLL over the test data." (The CLL metric is sketched after the table.)
Hardware Specification | No | The paper bounds learning time at 24 hours but does not specify the hardware used (e.g., CPU or GPU model, memory).
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | Yes | "To tune the hyper-parameters, we used a grid search over the hyper-parameter space." (A grid-search sketch follows the table.)
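
The "Pseudocode" row quotes the paper's Algorithm 1, the high-level DACLearn loop. That algorithm operates on arithmetic circuits, scoring candidate split operations by conditional log-likelihood with a complexity penalty; the sketch below is only the generic greedy outer loop such a learner instantiates, with hypothetical `candidates`, `apply_op`, and `score` callables standing in for the circuit-specific machinery.

```python
def greedy_structure_search(model, candidates, apply_op, score, max_iters=1000):
    """Generic greedy structure-search loop (a sketch, not DACLearn itself).

    Scores every candidate structure operation, applies the best one, and
    stops when no candidate improves the (penalized) objective or the
    iteration budget runs out. `apply_op` must return a new model rather
    than mutating its argument.
    """
    best_score = score(model)
    for _ in range(max_iters):
        scored = [(score(apply_op(model, op)), op) for op in candidates(model)]
        if not scored:
            break  # no candidate operations remain
        new_score, best_op = max(scored, key=lambda pair: pair[0])
        if new_score <= best_score:
            break  # no remaining operation helps; keep the current model
        model = apply_op(model, best_op)
        best_score = new_score
    return model
```

In the paper's setting, `score` would correspond to a discriminative objective (conditional log-likelihood) offset by a penalty on circuit complexity, so added structure must pay for itself in accuracy.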
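The "Dataset Splits" row reports average conditional log-likelihood (CLL) over the test data. Below is a minimal sketch of that metric, assuming the model exposes per-variable conditional probabilities P(Y_j = 1 | x); this factorized interface is an assumption for illustration, whereas the paper computes conditionals exactly by circuit inference.

```python
import numpy as np

def average_cll(cond_probs, y_true):
    """Average conditional log-likelihood over a test set.

    cond_probs: (n_examples, n_query_vars) array of model probabilities
        P(Y_j = 1 | x_i).
    y_true: binary array of the same shape with the observed values.
    """
    eps = 1e-12  # clip to avoid log(0) on extreme probabilities
    p = np.clip(np.asarray(cond_probs, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y_true, dtype=float)
    ll = y * np.log(p) + (1.0 - y) * np.log1p(-p)  # log1p(-p) = log(1 - p)
    return ll.sum(axis=1).mean()  # sum over variables, mean over examples
```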
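The "Experiment Setup" row states that hyper-parameters were tuned by grid search on the validation set. A minimal, generic sketch follows; `train_fn`, `eval_fn`, and the parameter names in the usage example are placeholders, not the paper's actual interface.

```python
from itertools import product

def grid_search(train_fn, eval_fn, grid):
    """Exhaustive grid search over a dict mapping names to value lists.

    Trains one model per hyper-parameter combination and keeps the one
    with the best validation score (e.g., average validation CLL).
    """
    best_score, best_params, best_model = float("-inf"), None, None
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        model = train_fn(**params)
        val_score = eval_fn(model)
        if val_score > best_score:
            best_score, best_params, best_model = val_score, params, model
    return best_score, best_params, best_model
```

For example, `grid_search(train, validate, {"edge_cost": [0.1, 1.0, 10.0], "max_splits": [100, 1000]})` would try all six combinations; both hyper-parameter names here are hypothetical.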