Temporal Attribute Prediction via Joint Modeling of Multi-Relational Structure Evolution
Authors: Sankalp Garg, Navodita Sharma, Woojeong Jin, Xiang Ren
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on five specially curated datasets for this problem and show a consistent improvement in time series prediction results. |
| Researcher Affiliation | Academia | Sankalp Garg (Indian Institute of Technology Delhi), Navodita Sharma (Indian Institute of Technology Madras), Woojeong Jin and Xiang Ren (University of Southern California). {sankalp2621998, navoditasharma16}@gmail.com, {woojeong.jin, xiangren}@usc.edu |
| Pseudocode | No | The paper describes the model mathematically and textually, but it does not include any explicitly labeled 'Algorithm' or 'Pseudocode' blocks. |
| Open Source Code | Yes | We release the data and code of model DArtNet for future research. https://github.com/INK-USC/DArtNet |
| Open Datasets | Yes | We release the data and code of model DArtNet for future research. https://github.com/INK-USC/DArtNet |
| Dataset Splits | Yes | AGT: 463,188 train / 57,898 valid / 57,900 test; 58 nodes, 178 relations, monthly. CAC (small): 2,070 / 388 / 508; 90 nodes, 1 relation, yearly. CAC (large): 116,933 / 167,047 / 334,096; 20,000 nodes, 1 relation, yearly. MTG: 270,362 / 39,654 / 74,730; 44 nodes, 90 relations, monthly. AGG: 3,879,878 / 554,268 / 1,108,538; 6,635 nodes, 246 relations, monthly. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper mentions 'All models are implemented in PyTorch using Adam Optimizer for training.' but does not provide specific version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | All models are implemented in PyTorch using Adam Optimizer for training. The best hyperparameters are chosen using the validation dataset. Typically, increasing the value of λ gives better results, and the best results on each dataset are reported. ... In our experiments we use the functions as single-layered feed-forward networks. |
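The experiment-setup quote above names three concrete ingredients: PyTorch, the Adam optimizer, and single-layered feed-forward networks combined with a tuned weight λ. A minimal sketch of such a setup is shown below; the layer sizes, learning rate, λ value, and placeholder loss terms are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the single-layered feed-forward functions
# the paper describes; dimensions here are assumed, not from the paper.
class SingleLayerFF(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.linear(x))

model = SingleLayerFF(32, 16)
# Adam optimizer, as stated in the paper; the learning rate is assumed.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# lam weights one loss term against the other; the paper tunes λ on the
# validation set and reports that larger values typically help.
lam = 1.0

x = torch.randn(8, 32)
structure_loss = model(x).pow(2).mean()   # placeholder loss terms
attribute_loss = model(x).abs().mean()
loss = structure_loss + lam * attribute_loss

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Since the paper does not pin software versions, this sketch targets any recent PyTorch release; reproducing the reported numbers would still require the released code and data from the repository above.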