Exchangeable Variable Models
Authors: Mathias Niepert, Pedro Domingos
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted extensive experiments to assess the efficiency and effectiveness of MEVMs as tractable probabilistic models for classification and probability estimation. A major objective is the comparison of MEVMs and naive Bayes models. We also compare MEVMs with several state-of-the-art classification algorithms. For the probability estimation experiments, we compare MEVMs to latent naive Bayes models and several widely used tractable graphical model classes such as latent tree models. Table 2. Accuracy values for the two-class experiments. Bold numbers indicate significance (paired t-test; p < 0.01) compared to non-bold results in the same row. |
| Researcher Affiliation | Academia | Mathias Niepert MNIEPERT@CS.WASHINGTON.EDU Pedro Domingos PEDROD@CS.WASHINGTON.EDU Department of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA |
| Pseudocode | Yes | Algorithm 1 Expectation Maximization for MEVMs |
| Open Source Code | No | All implementations and data sets will be published. |
| Open Datasets | Yes | We used the SCIKIT 0.14 functions to load the 20Newsgroup train and test samples. The polarity data set is a well-known sentiment analysis problem based on movie reviews (Pang & Lee, 2004). We conducted experiments with a widely used collection of data sets (Van Haaren & Davis, 2012; Gens & Domingos, 2013; Lowd & Rooshenas, 2013). |
| Dataset Splits | No | Each synthetic data set consists of 10^6 training and 10000 test examples. We did not use the validation data. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments. |
| Software Dependencies | Yes | We used the SCIKIT 0.14 functions to load the 20Newsgroup train and test samples. |
| Experiment Setup | Yes | We applied Laplace smoothing with a constant of 0.1. The same parameter values were applied across all data sets and experiments. We set the latent variable's domain size to 20 for each problem and applied the same EM initialization for MEVMs and NB models. We ran EM until the average log-likelihood increase between iterations was less than 0.001. |
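The setup row above pins down two concrete choices: Laplace smoothing with a constant of 0.1, and an EM stopping rule based on an average log-likelihood increase below 0.001. A minimal sketch of how these two pieces might look in code is shown below; the function names, array shapes, and the Bernoulli parameterization are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Assumed smoothing constant, per the paper's experiment setup.
ALPHA = 0.1

def smoothed_bernoulli_params(X, weights):
    """Weighted estimate of per-feature Bernoulli parameters with Laplace smoothing.

    X: (n_examples, n_features) binary data matrix.
    weights: (n_examples,) EM responsibilities for one latent component.
    """
    counts = weights @ X          # expected count of 1s per feature
    total = weights.sum()         # expected number of examples in this component
    # Add ALPHA pseudo-counts to each of the two Bernoulli outcomes.
    return (counts + ALPHA) / (total + 2 * ALPHA)

def em_converged(prev_avg_ll, curr_avg_ll, tol=0.001):
    """Stop EM when the average log-likelihood gain falls below tol."""
    return (curr_avg_ll - prev_avg_ll) < tol
```

With two examples `[1,0]` and `[1,1]` and unit weights, the first feature's estimate is `(2 + 0.1) / (2 + 0.2)`, illustrating how the 0.1 constant keeps probabilities away from 0 and 1 even for rare features.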