Set Prediction without Imposing Structure as Conditional Density Estimation

Authors: David W. Zhang, Gertjan J. Burghouts, Cees G. M. Snoek

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate on a variety of datasets the capability to learn multi-modal densities and produce different plausible predictions. Our approach is competitive with previous set prediction models on standard benchmarks.
Researcher Affiliation | Collaboration | David W. Zhang (1), Gertjan J. Burghouts (2), Cees G. M. Snoek (1); (1) University of Amsterdam, {w.d.zhang, cgmsnoek}@uva.nl; (2) TNO, gertjan.burghouts@tno.nl
Pseudocode | No | The paper provides mathematical equations and describes the steps of the proposed method, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/davzha/DESP.
Open Datasets | Yes | Following the setup from Zhang et al. (2019), we convert MNIST (LeCun et al., 2010) into point-clouds... We re-purpose CelebA (Liu et al., 2015) for subset anomaly detection...
Dataset Splits | No | The paper mentions training and test sets but does not give exact percentages, absolute counts, or a methodology for the train/validation/test split, nor does it specify predefined splits for the used datasets beyond the test partition.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming-language, library, or solver versions).
Experiment Setup | Yes | We adopt the same neural network architecture, hyper-parameters and padding scheme as Zhang et al. (2019), to facilitate a fair comparison. Both g and f are instantiated as 2-layer MLPs with 256 hidden dimensions.
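The MNIST-to-point-cloud conversion quoted in the Open Datasets row follows the setup of Zhang et al. (2019): pixels above an intensity threshold become 2-D points. A minimal sketch is below; the threshold value and the normalization to [0, 1] are assumptions for illustration, not details confirmed by the quoted text.

```python
import numpy as np

def image_to_point_cloud(img, threshold=0.5):
    """Return the (x, y) coordinates of pixels brighter than `threshold`,
    normalized to [0, 1]. `img` is a 2-D array of intensities in [0, 1]."""
    ys, xs = np.nonzero(img > threshold)
    coords = np.stack([xs, ys], axis=1).astype(np.float32)
    # Divide by (width - 1, height - 1) so coordinates span the unit square.
    return coords / (np.array(img.shape[::-1], dtype=np.float32) - 1)

# A toy 28x28 "image" with two lit pixels yields a 2-point cloud.
img = np.zeros((28, 28), dtype=np.float32)
img[10, 5] = 1.0
img[20, 7] = 1.0
cloud = image_to_point_cloud(img)  # shape (2, 2)
```

Each resulting set has a different cardinality per image, which is why the paper pairs this data with a padding scheme.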
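The 2-layer MLPs with 256 hidden dimensions mentioned in the Experiment Setup row can be sketched as follows. This is a generic NumPy sketch: the ReLU activation, the He-style initialization, and the example input/output sizes are assumptions, since the paper inherits the exact architecture from Zhang et al. (2019).

```python
import numpy as np

def make_mlp(in_dim, out_dim, hidden=256, seed=0):
    """Build a 2-layer MLP (linear -> ReLU -> linear) as a closure."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((in_dim, hidden)) * np.sqrt(2.0 / in_dim)
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, out_dim)) * np.sqrt(2.0 / hidden)
    b2 = np.zeros(out_dim)

    def forward(x):
        h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
        return h @ W2 + b2

    return forward

# Hypothetical dimensions: g embeds 2-D points, f scores a 256-D embedding.
g = make_mlp(2, 256)
f = make_mlp(256, 1)
y = g(np.zeros((4, 2)))  # batch of 4 points -> shape (4, 256)
```

In a framework setting the same shape would be `nn.Sequential(Linear, ReLU, Linear)`; the closure form here just keeps the sketch dependency-free.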