Causal Intervention for Human Trajectory Prediction with Cross Attention Mechanism

Authors: Chunjiang Ge, Shiji Song, Gao Huang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method has consistent improvements on the ETH-UCY datasets with four baseline methods and achieves competitive performance with existing methods. We demonstrate the effectiveness of SEAD on the trajectory prediction datasets ETH (Pellegrini, Ess, and Gool 2010) and UCY (Lerner, Chrysanthou, and Lischinski 2007). Our SEAD method can be applied to both RNN-based and CNN-based frameworks... We show that our method achieves consistent improvements on all four baseline models. With SEAD, STGAT, Social-STGCNN, TF and Causal-STGAT improve by 0.05/0.09, 0.03/0.04, 0.04/0.08 and 0.03/0.07 on the ADE/FDE metrics, respectively. (The standard ADE/FDE computation is sketched after this table.)
Researcher Affiliation | Academia | Chunjiang Ge, Shiji Song, Gao Huang* — Department of Automation, BNRist, Tsinghua University; gecj20@mails.tsinghua.edu.cn, {shijis, gaohuang}@tsinghua.edu.cn
Pseudocode | No | The paper includes Figure 4, a structural diagram of the Social Cross Attention module, but it contains no pseudocode or clearly labeled algorithm block. (A generic cross-attention sketch follows this table.)
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the source code for the described methodology, nor does it provide a direct link to a code repository.
Open Datasets | Yes | Our models are trained on the ETH (Pellegrini, Ess, and Gool 2010) and UCY (Lerner, Chrysanthou, and Lischinski 2007) datasets.
Dataset Splits | Yes | We leverage the leave-one-out protocol to split the training, validation and test datasets: train and validate on four domains and test on the remaining one. (A minimal split sketch follows this table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only discusses software implementations and training details.
Software Dependencies | No | The paper implements the method on RNN-based STGAT, CNN-based Social-STGCNN, and the Transformer-based Trajectory Forecasting Transformer (TF), and refers to M-LSTM, G-LSTM, a graph attention model (GAT), and TPCNN, but it does not specify version numbers for these frameworks or for the underlying language and libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | The only difference is that the learning rate for our SCA module is 0.04, while the other LSTM and GAT modules keep the same learning rate as STGAT. ... The learning rate for the SCA module in Causal-STGAT is set to 0.01. ... The initial learning rate is 0.01, decayed to 0.002 after 150 epochs. (A hedged optimizer/scheduler sketch follows this table.)
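
For reference, the ADE/FDE numbers quoted in the Research Type row are the standard displacement errors used on ETH-UCY: ADE averages the Euclidean error over all predicted timesteps, FDE takes the error at the final timestep. A minimal PyTorch sketch, assuming world-coordinate trajectories of shape (num_peds, pred_len, 2):

```python
import torch

def ade_fde(pred: torch.Tensor, gt: torch.Tensor) -> tuple[float, float]:
    """Average and Final Displacement Error.

    pred, gt: (num_peds, pred_len, 2) predicted and ground-truth positions.
    ADE averages the Euclidean error over all predicted timesteps;
    FDE is the error at the final predicted timestep only.
    """
    dist = torch.linalg.norm(pred - gt, dim=-1)  # (num_peds, pred_len)
    return dist.mean().item(), dist[:, -1].mean().item()
```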
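Since the paper provides no pseudocode for the Social Cross Attention (SCA) module (only the Figure 4 diagram), the following is a generic cross-attention sketch for intuition only, not the authors' implementation; the class name, dimensions, and the residual/layer-norm layout are assumptions:

```python
import torch
import torch.nn as nn

class SocialCrossAttention(nn.Module):
    """Generic cross-attention sketch: each pedestrian's feature queries
    the features of neighbouring pedestrians in the same scene.
    Illustrative only; not the paper's SCA module."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, ego: torch.Tensor, neighbours: torch.Tensor) -> torch.Tensor:
        # ego: (batch, 1, d_model); neighbours: (batch, n_neighbours, d_model)
        out, _ = self.attn(query=ego, key=neighbours, value=neighbours)
        return self.norm(ego + out)  # residual connection + layer norm
```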
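A minimal sketch of the leave-one-out protocol described in the Dataset Splits row, assuming the five conventional ETH-UCY scene names (eth, hotel, univ, zara1, zara2):

```python
# ETH-UCY leave-one-out: train/validate on four scenes, test on the fifth.
SCENES = ["eth", "hotel", "univ", "zara1", "zara2"]

def leave_one_out(test_scene: str) -> tuple[list[str], str]:
    assert test_scene in SCENES
    train_val = [s for s in SCENES if s != test_scene]
    return train_val, test_scene

# Each scene serves as the held-out test domain exactly once.
for scene in SCENES:
    train_val, test = leave_one_out(scene)
    print(f"test={test}  train/val={train_val}")
```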
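A hedged sketch of the learning-rate setup quoted in the Experiment Setup row, using PyTorch parameter groups and MultiStepLR. The module stand-ins and the backbone's base rate are assumptions, since the quoted text does not restate STGAT's original rate:

```python
import torch
import torch.nn as nn

# Illustrative stand-ins; the real models are the STGAT / Social-STGCNN
# variants named in the table, which the paper does not release.
backbone = nn.LSTM(input_size=2, hidden_size=32, batch_first=True)
sca = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)

# (1) Per-module learning rates: the SCA module trains at 0.04 while the
# remaining modules keep STGAT's original rate (1e-3 here is an assumption).
optimizer = torch.optim.Adam(
    [
        {"params": backbone.parameters()},         # uses the default lr below
        {"params": sca.parameters(), "lr": 0.04},  # SCA-specific lr
    ],
    lr=1e-3,
)

# (2) The reported decay, initial lr 0.01 -> 0.002 after 150 epochs, is a
# single MultiStepLR step with gamma = 0.002 / 0.01 = 0.2.
opt = torch.optim.SGD(sca.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[150], gamma=0.2)

for epoch in range(200):
    # ... one training epoch would run here ...
    scheduler.step()
```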