Sequential Neural Processes
Authors: Gautam Singh, Jaesik Yoon, Youngsung Son, Sungjin Ahn
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, we evaluate the proposed methods in dynamic (non-stationary) regression and 4D scene inference and rendering. |
| Researcher Affiliation | Collaboration | Gautam Singh (Rutgers University, singh.gautam@rutgers.edu); Jaesik Yoon (SAP, jaesik.yoon01@sap.com); Youngsung Son (ETRI, ysson@etri.re.kr); Sungjin Ahn (Rutgers University, sungjin.ahn@rutgers.edu) |
| Pseudocode | No | The paper describes the models and their components (e.g., T-Conv DRAW, Conv LSTM) but does not include structured pseudocode or algorithm blocks (an illustrative, hedged sketch of the generative loop is given after this table). |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include links to a code repository. |
| Open Datasets | No | The paper describes generating its own datasets for regression ("We generate a dataset consisting of sequences of functions...") and 2D/3D scene inference ("The 2D environments consist of...", "The 3D environments consist of..."), but does not provide access information (link, DOI, or citation) indicating that these datasets are publicly available. |
| Dataset Splits | No | The paper mentions "performing validation" and evaluating on "a held-out set of 1600 episodes", but it does not specify split percentages, sample counts, or the methodology used to divide the data into training, validation, and test sets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper cites TensorFlow in its references and names model components such as "Conv LSTM," but it does not specify version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | In experiments, we simply set α = 0 at the start of the training and set α = 1 when the reconstruction loss had saturated (see Appendix C.2.5). A hedged sketch of this two-phase schedule also follows the table. |
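
Since the paper itself contains no pseudocode, the following is a minimal, hypothetical sketch of the temporal generative loop it describes: at each timestep a per-step context representation conditions a latent transition p(z_t | z_<t, C_t), and a decoder maps (x, z_t) to predictions. This is not the authors' code; the module sizes, PyTorch framing, and all names (`SNPSketch`, `r_dim`, `h_dim`, etc.) are assumptions for illustration only.

```python
# Hypothetical sketch of an SNP-style generative loop (not the authors' code).
import torch
import torch.nn as nn

class SNPSketch(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=128, z_dim=64, h_dim=128):
        super().__init__()
        # Context encoder with mean aggregation, as in Neural Processes.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, r_dim))
        # Recurrent cell carrying temporal state across timesteps.
        self.temporal = nn.LSTMCell(r_dim + z_dim, h_dim)
        # Latent transition p(z_t | z_<t, C_t), parameterized as a Gaussian.
        self.prior = nn.Linear(h_dim, 2 * z_dim)
        # Decoder maps (x, z_t) to predictions.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, y_dim))

    def forward(self, contexts, targets_x):
        # contexts: list over time of (x_C, y_C) tensors; targets_x: list of x_T.
        h = c = torch.zeros(1, self.temporal.hidden_size)
        z = torch.zeros(1, self.prior.out_features // 2)
        preds = []
        for (x_c, y_c), x_t in zip(contexts, targets_x):
            # Aggregate the timestep's context into a single representation.
            r = self.encoder(torch.cat([x_c, y_c], -1)).mean(0, keepdim=True)
            # Advance the temporal state with the context and previous latent.
            h, c = self.temporal(torch.cat([r, z], -1), (h, c))
            # Sample z_t via the reparameterization trick.
            mu, logvar = self.prior(h).chunk(2, -1)
            z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
            # Decode predictions for this timestep's target inputs.
            preds.append(self.decoder(
                torch.cat([x_t, z.expand(len(x_t), -1)], -1)))
        return preds
```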
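
The Experiment Setup row quotes a two-phase schedule: α = 0 at the start of training, switched to α = 1 once the reconstruction loss has saturated. The sketch below implements only that schedule; the saturation test (improvement below a tolerance over a sliding window) is an assumption, since the paper defers the details to its Appendix C.2.5.

```python
# Hedged sketch of the quoted alpha schedule; the saturation heuristic
# (patience window and tolerance) is an assumption, not from the paper.
from collections import deque

class AlphaSchedule:
    def __init__(self, window=1000, tol=1e-3):
        self.alpha = 0.0                      # alpha = 0 at the start of training
        self.losses = deque(maxlen=window)    # recent reconstruction losses
        self.tol = tol

    def update(self, recon_loss: float) -> float:
        self.losses.append(recon_loss)
        if self.alpha == 0.0 and len(self.losses) == self.losses.maxlen:
            # Declare saturation when the loss improves by less than `tol`
            # over the window, then switch alpha to 1 for the rest of training.
            if self.losses[0] - self.losses[-1] < self.tol:
                self.alpha = 1.0
        return self.alpha
```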