Energy-Based Processes for Exchangeable Data

Authors: Mengjiao Yang, Bo Dai, Hanjun Dai, Dale Schuurmans

Venue: ICML 2020

Reproducibility Assessment

Research Type: Experimental
LLM Response: "We develop an efficient training procedure for EBPs that demonstrates state-of-the-art performance on a variety of tasks such as point cloud generation, classification, denoising, and image completion. ... Finally, we evaluate the effectiveness of EBPs with NCI training on a set of supervised (e.g., 1D regression and image completion) and unsupervised tasks (e.g., point-cloud feature extraction, generation and denoising), demonstrating state-of-the-art performance across a range of scenarios."

Researcher Affiliation: Collaboration
LLM Response: "Google Research, Brain Team; University of Alberta. Correspondence to: Mengjiao Yang <sherryy@google.com>, Bo Dai <bodai@google.com>."

Pseudocode: Yes
LLM Response: "Algorithm 1: Neural Collapsed Inference"
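
The main text identifies the training routine only by its pseudocode header, "Algorithm 1: Neural Collapsed Inference." For orientation, here is a minimal sketch, in JAX, of the generic contrastive energy-based training step that such procedures build on: Langevin-dynamics negative sampling against a permutation-invariant, Deep Sets-style energy. This is not a reproduction of the paper's Algorithm 1; all function names, network shapes, and hyperparameter values are illustrative assumptions.

```python
# Minimal sketch of contrastive EBM training with Langevin negatives.
# NOT the paper's Algorithm 1 (Neural Collapsed Inference); all values
# and names below are illustrative assumptions.

import jax
import jax.numpy as jnp

def init_params(key, in_dim=3, hidden=128):
    """Random parameters for a small Deep Sets-style energy network."""
    k1, k2, k3 = jax.random.split(key, 3)
    return {
        "phi": jax.random.normal(k1, (in_dim, hidden)) * 0.05,  # per-point embedding
        "rho": jax.random.normal(k2, (hidden, hidden)) * 0.05,  # post-pooling layer
        "out": jax.random.normal(k3, (hidden, 1)) * 0.05,       # scalar energy head
    }

def energy(params, x_set):
    """Scalar energy of a set x_set of shape (n_points, in_dim).

    Mean-pooling the per-point embeddings makes the energy invariant to
    point order, in the spirit of the Deep Sets encoder the paper uses."""
    h = jax.nn.relu(x_set @ params["phi"])   # (n, hidden)
    pooled = h.mean(axis=0)                  # (hidden,) — order-invariant
    g = jax.nn.relu(pooled @ params["rho"])
    return (g @ params["out"]).squeeze()

def langevin_negatives(params, key, x_init, n_steps=20, step_size=1e-2):
    """Approximate model samples via noisy gradient descent on the energy."""
    grad_e = jax.grad(energy, argnums=1)
    x = x_init
    for _ in range(n_steps):
        key, sub = jax.random.split(key)
        noise = jax.random.normal(sub, x.shape)
        x = x - step_size * grad_e(params, x) + jnp.sqrt(2 * step_size) * noise
    return x

def contrastive_loss(params, x_pos, x_neg):
    """Push energy down on data, up on model samples."""
    return energy(params, x_pos) - energy(params, x_neg)

# One illustrative update on a random stand-in "point cloud".
key = jax.random.PRNGKey(0)
k_params, k_data, k_init, k_mcmc = jax.random.split(key, 4)
params = init_params(k_params)
x_pos = jax.random.normal(k_data, (256, 3))  # stand-in for a data batch
x_neg = langevin_negatives(params, k_mcmc, jax.random.normal(k_init, (256, 3)))
loss, grads = jax.value_and_grad(contrastive_loss)(params, x_pos, x_neg)
params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```

The number of Langevin steps and negative samples per update corresponds to the hyperparameters the paper lists under Experiment Setup below; their actual values are deferred to Appendix F.
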
Open Source Code: Yes
LLM Response: "The code is available at https://github.com/google-research/google-research/tree/master/ebp."

Open Datasets: Yes
LLM Response: "We separately train two conditional EBPs on the MNIST (LeCun, 1998) and the CelebA dataset (Liu et al., 2015). ... We train one unconditional EBP per category on airplane, chair, and car from the ShapeNet dataset (Wu et al., 2015). ... We then extract the Deep Sets output (in our model) for each point cloud in ModelNet40 (Wu et al., 2015)..."

Dataset Splits: No
LLM Response: The paper mentions using specific datasets for training and evaluation but does not provide explicit details on train/validation/test splits, such as percentages or sample counts, nor does it cite predefined splits for these experiments.

Hardware Specification: No
LLM Response: The paper does not specify the hardware used for the experiments, such as GPU or CPU models or memory.

Software Dependencies: No
LLM Response: The paper describes various model components (e.g., neural networks, Deep Sets encoders, RNNs, MLPs) but does not provide version numbers for any software dependencies or libraries.

Experiment Setup: No
LLM Response: The paper states "Details of each experiment can be found in Appendix F" and names general hyperparameters (learning rate, number of training iterations, number of MCMC steps per iteration, and number of negative samples) but does not give their specific values in the main text.
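
Since the main text defers these values to Appendix F, a reproduction would need to recover them from the appendix or the released code. Purely as a sketch, the hyperparameters named above could be collected in a configuration object like the following; every value shown is a hypothetical placeholder, not a value reported by the authors.

```python
# Hypothetical experiment configuration collecting the hyperparameters the
# paper names (actual values deferred to its Appendix F). Every value below
# is a placeholder assumption, NOT a value reported by the authors.

from dataclasses import dataclass

@dataclass
class EBPTrainConfig:
    learning_rate: float = 1e-4       # placeholder; see Appendix F / released code
    train_iterations: int = 100_000   # placeholder
    mcmc_steps_per_iter: int = 20     # placeholder
    num_negative_samples: int = 64    # placeholder

config = EBPTrainConfig()
print(config)
```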