Temporally Disentangled Representation Learning under Unknown Nonstationarity

Authors: Xiangchen Song, Weiran Yao, Yewen Fan, Xinshuai Dong, Guangyi Chen, Juan Carlos Niebles, Eric Xing, Kun Zhang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluations demonstrated the reliable identification of time-delayed latent causal influences, with the proposed methodology substantially outperforming existing baselines that fail to exploit the nonstationarity adequately and, consequently, cannot distinguish distribution shifts.
Researcher Affiliation | Collaboration | Xiangchen Song (1), Weiran Yao (2), Yewen Fan (1), Xinshuai Dong (1), Guangyi Chen (1,3), Juan Carlos Niebles (2), Eric Xing (1,3), Kun Zhang (1,3); affiliations: (1) Carnegie Mellon University, (2) Salesforce Research, (3) Mohamed bin Zayed University of Artificial Intelligence
Pseudocode | No | The paper describes its model architecture and optimization process but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | The code is available at https://github.com/xiangchensong/nctrl (a fetch sketch follows the table).
Open Datasets | Yes | Video data (MoSeq dataset): "We test the NCTRL framework to analyze mouse behavior video data from Wiltschko et al. [19]..." The dataset can be accessed via https://dattalab.github.io/moseq2-website/index.html
Dataset Splits | No | The paper describes a model training process but does not provide specific details on validation splits, such as percentages, sample counts, or explicit references to predefined validation sets (an illustrative split is sketched after this table).
Hardware Specification | Yes | All experiments are done on a GPU workstation with CPU: Intel i7-13700K, GPU: NVIDIA RTX 4090, Memory: 128 GB.
Software Dependencies | No | The paper details network components such as Conv2D and Leaky ReLU layers but does not list software dependencies with version numbers (e.g., PyTorch version, Python version, or specific library versions); a version-logging sketch follows the table.
Experiment Setup | No | The paper discusses the overall model architecture and optimization objectives but does not report specific hyperparameters (e.g., learning rate, batch size) or other system-level training settings in the main text; a hypothetical configuration sketch follows the table.
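
Since the code is released at the GitHub URL in the table, a minimal way to obtain it is a plain clone. Only the repository URL comes from the report; the use of git via subprocess and the default clone destination are assumptions.

```python
# Minimal sketch for fetching the released code. Only the repository URL
# is confirmed by the report; everything else is an assumption.
import subprocess

subprocess.run(
    ["git", "clone", "https://github.com/xiangchensong/nctrl"],
    check=True,
)
```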
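
Because the paper reports no validation split, the sketch below shows one conventional way to hold out a validation segment for sequence data. The 90/10 ratio, data shapes, and variable names are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

# Hypothetical data: (num_sequences, sequence_length, observation_dim).
# The paper reports no split, so the 90/10 hold-out below is purely
# illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 50, 8))

n_train = int(0.9 * len(x))
x_train, x_val = x[:n_train], x[n_train:]
print(f"train: {x_train.shape}, val: {x_val.shape}")
```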
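
Since no pinned dependency versions were found, a lightweight way to record them at run time is sketched below. PyTorch is an assumption inferred from the Conv2D / Leaky ReLU layers the paper describes; the actual stack is unconfirmed.

```python
import platform

print("Python:", platform.python_version())

# PyTorch is an assumption inferred from the layers mentioned in the
# table above; substitute whatever framework the released code uses.
try:
    import torch
    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed")
```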
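
Finally, the main text omits hyperparameters such as the learning rate and batch size; a hypothetical configuration record of the kind the checklist asks for is sketched below. Every field name and value is a placeholder for illustration only.

```python
from dataclasses import dataclass


@dataclass
class TrainConfig:
    # Every value here is a placeholder; the paper's main text reports
    # none of these settings.
    learning_rate: float = 1e-3
    batch_size: int = 64
    num_epochs: int = 100
    latent_dim: int = 8
    seed: int = 0


print(TrainConfig())
```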