Learning to Decouple Complex Systems

Authors: Zihan Zhou, Tianshu Yu

ICML 2023

Reproducibility assessment (variable, result, and supporting LLM response):

Research Type: Experimental
  "Experimental results on synthetic and real-world datasets show the advantages of our approach when facing complex and cluttered sequential data compared to the state-of-the-art."

Researcher Affiliation: Academia
  "The Chinese University of Hong Kong, Shenzhen; Shenzhen Institute of Artificial Intelligence and Robotics for Society. Correspondence to: Tianshu Yu <yutianshu@cuhk.edu.cn>."

Pseudocode: No
  The paper describes the method using mathematical equations and textual explanations but does not include a formal pseudocode or algorithm block.

Open Source Code: Yes
  "Our code is available at https://github.com/LOGO-CUHKSZ/DNS."

Open Datasets: Yes
  "The human action dataset contains three types of human actions, which are hand clapping, hand waving, and jogging (Schuldt et al., 2004)."

Dataset Splits: Yes
  "We generate 50k training samples, 5k validation samples, and 5k test samples." and "We use 5-fold cross-validation (except for the three-body dataset, because training processes of all models are very stable) and early stop if the validation accuracy is not improved for 10 epochs." (See the early-stopping sketch after the table.)

Hardware Specification: No
  The paper does not provide specific details about the hardware used, such as GPU models, CPU types, or memory specifications.

Software Dependencies: No
  The paper mentions Python and torch.rand (implying PyTorch) but does not provide version numbers for these or any other software dependencies.

Experiment Setup: Yes
  "We use the Adam optimizer and set the learning rate to 1e-3 with a cosine annealing scheduler with eta_min=1e-4 (5e-5 on the three-body dataset). Except for the spring dataset, we apply gradient clipping with the max gradient norm equal to 0.1. We use cumulative gradients on the three-body dataset with batch size equal to 1 and update after 128 forward passes. We set the batch size to 128 and 1 on the spring and human action datasets, respectively." (See the training-configuration sketch after the table.)
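The early-stopping rule quoted under Dataset Splits is simple to reproduce. Below is a minimal sketch, assuming a generic training loop; `train_one_epoch` and `evaluate` are hypothetical callables, not functions from the authors' repository, and `max_epochs` is an assumed cap the paper does not state.

```python
def train_with_early_stopping(model, train_one_epoch, evaluate,
                              max_epochs=200, patience=10):
    # Stop once validation accuracy has not improved for `patience`
    # consecutive epochs (10 in the paper). `max_epochs` is an assumed
    # upper bound; the paper does not report one.
    best_acc, bad_epochs = 0.0, 0
    for _ in range(max_epochs):
        train_one_epoch(model)
        acc = evaluate(model)
        if acc > best_acc:
            best_acc, bad_epochs = acc, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best_acc
```

In the paper's 5-fold setting this loop would run once per fold, with `evaluate` computing accuracy on that fold's validation split.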
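The Experiment Setup row maps directly onto standard PyTorch components. The sketch below wires them together under stated assumptions: the model, data, and loss are placeholders (the actual DNS model lives in the linked repository), `T_max` for the cosine schedule is assumed since the paper does not report the annealing horizon, and the accumulation branch mirrors the three-body configuration (batch size 1, update every 128 forward passes).

```python
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholders standing in for the DNS model and the three-body data.
model = nn.Linear(16, 3)
train_loader = [(torch.randn(1, 16), torch.randint(0, 3, (1,)))
                for _ in range(256)]
criterion = nn.CrossEntropyLoss()

optimizer = Adam(model.parameters(), lr=1e-3)
# eta_min is 1e-4 in general, 5e-5 on the three-body dataset;
# T_max=100 is an assumption, not a value from the paper.
scheduler = CosineAnnealingLR(optimizer, T_max=100, eta_min=5e-5)

ACCUM_STEPS = 128  # three-body: batch size 1, update after 128 forwards

for _ in range(100):
    optimizer.zero_grad()
    for step, (x, y) in enumerate(train_loader):
        # Scale the loss so accumulated gradients average over the window.
        loss = criterion(model(x), y) / ACCUM_STEPS
        loss.backward()
        if (step + 1) % ACCUM_STEPS == 0:
            # Clip with max gradient norm 0.1 (skipped on the spring
            # dataset per the paper).
            nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
            optimizer.step()
            optimizer.zero_grad()
    scheduler.step()
```

On the spring dataset the same loop would instead use batch size 128, no accumulation, no clipping, and eta_min=1e-4.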