On Tractable Computation of Expected Predictions
Authors: Pasha Khosravi, YooJung Choi, Yitao Liang, Antonio Vergari, Guy Van den Broeck
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically show our algorithm to consistently outperform standard imputation techniques on a variety of datasets. |
| Researcher Affiliation | Academia | Pasha Khosravi, YooJung Choi, Yitao Liang, Antonio Vergari, and Guy Van den Broeck Department of Computer Science University of California, Los Angeles {pashak,yjchoi,yliang,aver,guyvdb}@cs.ucla.edu |
| Pseudocode | Yes | Algorithm 1 EC2(n, m) |
| Open Source Code | Yes | Our implementation of the algorithm and experiments are available at https://github.com/UCLA-StarAI/mc2. |
| Open Datasets | Yes | We construct a 6-dataset testing suite, four of which are common regression benchmarks from several domains [13], and the rest are classification on MNIST and FASHION datasets [36, 35]. |
| Dataset Splits | No | No specific details on train/validation/test splits (e.g., percentages or exact counts) are provided. The paper mentions monitoring loss on a 'held out set' during structure learning and using 'different percentages of missing features' for evaluation, but not specific train/validation splits. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for experiments are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch x.x) are listed in the paper. |
| Experiment Setup | Yes | For RCs, we adapt the parameter and structure learning of LCs [18], substituting the logistic regression objective with a ridge regression during optimization. For structure learning of both LCs and RCs, we considered up to 100 iterates while monitoring the loss on a held out set. For PSDDs we employ the parameter and structure learning of [19] with default parameters and run it up to 1000 iterates until no significant improvement is seen on a held out set. |
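The quantity the paper's Algorithm 1 (EC2) computes exactly over a circuit pair can be illustrated with a minimal sketch: the expected prediction E[f(x) | x_obs] under missing features, contrasted with the mean-imputation baseline the paper compares against. The linear regressor `f`, the Gaussian feature model, and all weights below are illustrative assumptions, not the paper's circuits; the Monte Carlo average merely stands in for the exact circuit computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (NOT the paper's PSDD/regression-circuit pair):
# a linear regressor f with weights w, and independent Gaussian features
# with means mu playing the role of the generative model.
w = np.array([1.0, -2.0, 0.5])
mu = np.array([0.0, 1.0, 2.0])
f = lambda x: x @ w

# Feature 0 is observed; features 1 and 2 are missing.
x_obs = np.array([0.3, np.nan, np.nan])
miss = np.isnan(x_obs)

# Baseline: mean imputation plugs the marginal mean into missing slots.
x_imp = np.where(miss, mu, x_obs)
pred_imputation = f(x_imp)

# Expected prediction E[f(x) | x_obs]: average f over samples of the
# missing features drawn from the generative model.
samples = np.tile(x_obs, (10_000, 1))
samples[:, miss] = rng.normal(mu[miss], 1.0, size=(10_000, miss.sum()))
pred_expected = f(samples).mean()
```

For a linear `f` and independent features the two coincide in expectation, which is exactly why the paper's contribution matters for nonlinear circuit classifiers and non-factorized distributions, where imputation and the true expected prediction diverge.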