Partially Observed Exchangeable Modeling

Authors: Yang Li, Junier Oliva

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Despite its generality, extensive empirical evaluations show that our model achieves state-of-the-art performance across a range of applications."
Researcher Affiliation | Academia | "Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA. Correspondence to: Yang Li <yangli95@cs.unc.edu>, Junier B. Oliva <joliva@cs.unc.edu>."
Pseudocode | No | The paper describes the model implementation and details, but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code is publicly available at https://github.com/lupalab/POEx."
Open Datasets | Yes | "We first utilize our POEx model to impute missing values for a set of images from MNIST and Omniglot datasets... We use the dataset created by Wang et al. (2020)... For point cloud upsampling, we use the ModelNet40 dataset... We evaluate on the ShapeNet dataset (Chang et al., 2015)... on two video datasets from Liao et al. (2020) and Xu et al. (2018)"
Dataset Splits | No | The paper mentions training and testing but does not provide explicit train/validation/test splits (percentages, counts, or a splitting methodology) that would allow exact reproduction of the data partitioning.
Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., GPU/CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper mentions software components and frameworks such as PyTorch (in the repository name) and various neural network architectures, but does not specify exact version numbers for any libraries, frameworks, or solvers used in the experiments.
Experiment Setup | No | The paper describes architectural choices, data representations, and general model components (e.g., Set Transformer, ACFlow, Gaussian posterior/prior distributions), but does not provide concrete numerical hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer settings needed to replicate the training process.
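For context on the "Dataset Splits" finding above, a reproducible partition needs, at minimum, fixed split fractions and a fixed random seed. A minimal sketch of such a specification (the fractions and seed here are illustrative placeholders, not values reported in the POEx paper):

```python
import random

def make_splits(n_examples, train_frac=0.8, val_frac=0.1, seed=0):
    """Deterministically partition example indices into train/val/test.

    The fractions and seed are hypothetical defaults for illustration;
    the paper does not report the values it used.
    """
    indices = list(range(n_examples))
    random.Random(seed).shuffle(indices)  # fixed seed -> reproducible order
    n_train = int(n_examples * train_frac)
    n_val = int(n_examples * val_frac)
    train = indices[:n_train]
    val = indices[n_train:n_train + n_val]
    test = indices[n_train + n_val:]
    return train, val, test

train, val, test = make_splits(10000)
print(len(train), len(val), len(test))  # 8000 1000 1000
```

Reporting the fractions, counts, and seed in this form is what would make a partition like this exactly reproducible by a third party.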