Earthfarseer: Versatile Spatio-Temporal Dynamical Systems Modeling in One Model

Authors: Hao Wu, Yuxuan Liang, Wei Xiong, Zhengyang Zhou, Wei Huang, Shilong Wang, Kun Wang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments and visualizations over eight human society physical and natural physical datasets demonstrate the state-of-the-art performance of EarthFarseer.
Researcher Affiliation | Academia | Hao Wu1, Yuxuan Liang2, Wei Xiong3*, Zhengyang Zhou1, Wei Huang3, Shilong Wang1, Kun Wang1* (1 University of Science and Technology of China; 2 Hong Kong University of Science and Technology (Guangzhou); 3 University of Tokyo; 4 Tsinghua University)
Pseudocode | No | The paper describes the model architecture and components in text and diagrams (e.g., Fig. 3), but it does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | We release our code at https://github.com/easylearningscores/EarthFarseer.
Open Datasets | Yes | We conduct extensive experiments on eight datasets, including two human social dynamics datasets (II, III), five natural scene datasets (IV, V, VI, VII, VIII), and a synthetic dataset (I) in Tab. 1, to verify the generalization ability and effectiveness of our algorithm. See dataset details in Appendix C and D. Moving MNIST (Srivastava, Mansimov, and Salakhudinov 2015), KTH (Schuldt, Laptev, and Caputo 2004), SEVIR (Veillette, Samsi, and Mattioli 2020).
Dataset Splits | No | Table 1 provides N_tr (number of training instances) and N_te (number of test instances) for each dataset, but it does not specify the size or percentage of a validation set, nor does the text describe the split ratios for validation.
Hardware Specification | Yes | We implement our model using the PyTorch framework and leverage four A100-PCIE-40GB GPUs as computing support. We measure the time it takes for the model to reach optimal performance by conducting fair executions across all frameworks on a Tesla V100-40GB.
Software Dependencies | No | The paper states 'We implement our model using the PyTorch framework' but does not provide specific version numbers for PyTorch or any other software libraries or dependencies.
Experiment Setup | Yes | We conduct experiments by selecting 2-14 TeDev block layers, under settings with batch size 16, 300 training epochs, and a learning rate of 0.01 (Adam optimizer).
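For readers attempting reproduction, the quoted hyperparameters translate directly into a PyTorch training loop. The sketch below is a minimal illustration, not the authors' implementation: the model, data, and loss are hypothetical placeholders (the real ones live in the repository linked above), while the batch size, epoch count, learning rate, and optimizer match the values quoted in the Experiment Setup row.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in model and data; the actual architecture is in the
# authors' repository (https://github.com/easylearningscores/EarthFarseer).
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
)
frames = torch.randn(64, 1, 10, 64, 64)   # dummy spatio-temporal input clips
targets = torch.randn(64, 1, 10, 64, 64)  # dummy future-frame targets
loader = DataLoader(TensorDataset(frames, targets), batch_size=16, shuffle=True)

# Hyperparameters as quoted in the Experiment Setup row above.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

for epoch in range(300):  # 300 training epochs, as reported
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

Because the paper pins neither PyTorch nor library versions (see the Software Dependencies row), even a faithful re-implementation of this configuration may produce slightly different numbers across software releases.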