Conditional Temporal Neural Processes with Covariance Loss

Authors: Boseon Yoo, Jiwoo Lee, Janghoon Ju, Seijun Chung, Soyeon Kim, Jaesik Choi

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In order to show the validity of the proposed loss, we conduct extensive sets of experiments on real-world datasets with state-of-the-art models and discuss the benefits and drawbacks of the proposed Covariance Loss." (A hedged sketch of such a loss term appears after the table.)
Researcher Affiliation | Collaboration | (1) Graduate School of AI, Korea Advanced Institute of Science and Technology, Republic of Korea; (2) Department of Computer Science and Engineering, Ulsan National Institute of Science and Technology, Republic of Korea; (3) Ineeji Inc., Republic of Korea.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "To demonstrate the validity of the proposed Covariance Loss, we employ neural networks that are designed to explicitly consider the spatial and temporal dependencies... and conduct extensive sets of experiments on well-known benchmark datasets... For experiments, we employ state-of-the-art models such as GMNN (Qu et al., 2019) and SSP (Izadi et al., 2020) and compare classification accuracy as shown in Table 1." Table 1 pairs each model with a public benchmark: DNN on MNIST, CNN on CIFAR-10, GMNN on PubMed, and SSP on Cora. "PeMSD7(M) is a highway traffic dataset from California... METR-LA dataset contains records of statistics on traffic speed... PeMS-BAY contains the velocity of cars..." (A dataset-loading sketch also appears after the table.)
Dataset Splits | Yes | "For experiments, we share training and test datasets used by the original work."
Hardware Specification | Yes | "Our experiments are conducted in an environment with Intel(R) Xeon(R) Gold 6226 CPU @ 2.70GHz and NVIDIA Quadro RTX 6000 GPU cards."
Software Dependencies | No | The paper does not provide specific ancillary software details with the version numbers required to replicate the experiments.
Experiment Setup | No | The paper does not provide specific experiment-setup details such as hyperparameters (learning rate, batch size, number of epochs) or detailed training configurations in the main text; it states only that "For experiments, we employ state-of-the-art models... and use parameters presented in (Yu et al., 2018)" without listing those parameters explicitly.
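
For orientation, here is a minimal sketch of what a covariance-style auxiliary loss can look like. It assumes, based on the paper's title and the quotes above rather than on its exact formulation, that the Covariance Loss penalizes mismatch between the Gram (covariance) matrix of learned representations and that of the targets over a mini-batch; the names covariance_loss, phi_x, and lam below are ours, not the authors'.

import torch

def covariance_loss(features: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Illustrative reconstruction, NOT the paper's exact loss.
    # features: (batch, d) penultimate-layer representations.
    # targets:  (batch,) or (batch, k) labels or regression targets.
    y = targets.reshape(len(targets), -1).float()
    # Center both so the Gram matrices approximate covariances.
    f = features - features.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    gram_f = f @ f.t() / f.shape[1]  # (batch, batch) feature similarity
    gram_y = y @ y.t() / y.shape[1]  # (batch, batch) target similarity
    return ((gram_f - gram_y) ** 2).mean()

# Hypothetical usage: weight the term against the usual task loss.
# loss = task_loss + lam * covariance_loss(phi_x, y)

Dividing by the feature and target dimensions keeps the two Gram matrices on comparable scales; the normalization the authors actually use is not recoverable from this report.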
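
Likewise, the image benchmarks named in the Open Datasets row are publicly downloadable. A minimal sketch using torchvision (our choice of tooling; the paper does not name its data-loading stack):

import torchvision
import torchvision.transforms as transforms

# Fetch two of the cited benchmarks to a local directory.
transform = transforms.ToTensor()
cifar_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
mnist_train = torchvision.datasets.MNIST(
    root="./data", train=True, download=True, transform=transform)
print(len(cifar_train), len(mnist_train))  # 50000 60000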