Set Prediction in the Latent Space

Authors: Konpat Preechakul, Chawan Piansaddhayanon, Burin Naowarat, Tirasan Khandhawit, Sira Sriswasdi, Ekapol Chuangsuwanich

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on several set prediction tasks, including image captioning and object detection, demonstrate the effectiveness of our method."
Researcher Affiliation | Academia | (1) Department of Computer Engineering, Chulalongkorn University; (2) Department of Mathematics, Faculty of Science, Mahidol University; (3) Computational Molecular Biology Group, Faculty of Medicine, Chulalongkorn University
Pseudocode | Yes | Algorithm 1 (single training step of Latent Set Prediction, LSP) and Algorithm 2 (Gradient Cloning with Rejection, GCR); see the sketch after this table.
Open Source Code | Yes | Code is available at https://github.com/phizaz/latent-set-prediction.
Open Datasets | Yes | "We used our modified MNIST dataset [22] in this experiment. We re-purposed the CLEVR dataset [23]... We used MIMIC-CXR dataset [16]"
Dataset Splits | No | The paper mentions "5,000 training and 1,000 test images" for the modified MNIST dataset, but does not explicitly provide validation splits for any of the datasets used.
Hardware Specification | No | The paper states "We included a typical training time for a run on all experiments" but does not specify the GPUs, CPUs, or other hardware used.
Software Dependencies | No | The paper mentions software such as Hugging Face's transformers and spaCy, but does not provide version numbers for these or other dependencies required for replication.
Experiment Setup | No | The paper describes some general aspects of the experimental setup, such as dataset sizes and task-specific details (e.g., predicting 10 sentences), but it does not report specific hyperparameters such as learning rate, batch size, or optimizer settings.
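The paper's Algorithms 1 and 2 (the LSP training step and Gradient Cloning with Rejection) are not reproduced on this page; the reference implementation lives in the linked repository. As a rough illustration of the general idea behind latent set prediction, namely matching a predicted set of latent vectors to a target set of latent vectors, the following is a minimal, hypothetical sketch that uses Hungarian matching on pairwise squared-L2 costs. The function name, the cost choice, and the equal-set-size assumption are assumptions made here for illustration; this is not the authors' LSP/GCR code.

```python
# Hypothetical sketch of latent-space set matching (not the authors' LSP/GCR implementation).
# Assumes the predicted and target sets have the same size and that a squared-L2 cost is used.
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_latent_sets(pred_latents: np.ndarray, target_latents: np.ndarray) -> np.ndarray:
    """Return, for each predicted latent vector, the index of its matched target latent.

    pred_latents:   (N, D) array of predicted latent vectors.
    target_latents: (N, D) array of target latent vectors (e.g., encoded ground-truth elements).
    """
    # Pairwise squared L2 distances between every predicted and target latent.
    diff = pred_latents[:, None, :] - target_latents[None, :, :]
    cost = np.einsum("ijk,ijk->ij", diff, diff)
    # Hungarian (optimal bipartite) matching on the cost matrix.
    row_ind, col_ind = linear_sum_assignment(cost)
    assignment = np.empty(len(pred_latents), dtype=int)
    assignment[row_ind] = col_ind
    return assignment


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preds = rng.normal(size=(5, 16))
    targets = rng.normal(size=(5, 16))
    print(match_latent_sets(preds, targets))  # e.g., a permutation of [0..4]
```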