FlightBERT++: A Non-autoregressive Multi-Horizon Flight Trajectory Prediction Framework

Authors: Dongyue Guo, Zheng Zhang, Zhen Yan, Jianwei Zhang, Yi Lin

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on a real-world dataset demonstrated that FlightBERT++ outperformed the competitive baselines in both FTP performance and computational efficiency.
Researcher Affiliation | Academia | College of Computer Science, Sichuan University, Chengdu 610000, China. {dongyueguo, zhaeng}@stu.scu.edu.cn, tankzhen@163.com, {zhangjianwei, yilin}@scu.edu.cn
Pseudocode | No | The paper describes the architecture and steps of the proposed framework, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link regarding the public release of its source code.
Open Datasets | No | To validate the proposed framework, a real-world flight trajectory dataset was collected from an ATC system. The paper does not provide concrete access information (a link, DOI, or citation with authors and year) for this dataset.
Dataset Splits | Yes | After preprocessing, a total of 8,643 flight trajectories remain in the dataset, which are further split into training, validation, and test subsets. Specifically, trajectories from the first 7 days are used to train the FTP models, while the 8th and 9th days are used for validation and testing, respectively (see the day-based split sketch after the table).
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models) used to run the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer but does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | The embedding size of the trajectory points, horizons, and differentials is set to 128, i.e., 128 channels for the Conv1D networks in the proposed framework. The number of Transformer blocks in the trajectory encoder and in the differential-prompted decoder is set to 4 each. The hidden size of the Transformer blocks is set to 768, and an attention operator with 4 heads is applied to all Transformer blocks. In the experiments, the latest 3-minute observations are used to predict the flight status of the next 5 minutes, i.e., 15 future trajectory points are predicted from 9 observed trajectory points. The Adam optimizer with an initial learning rate of 10^-4 is used to train all deep learning-based models (see the configuration sketch after the table).
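
The day-based split described in the Dataset Splits row could be reproduced along the following lines. This is a minimal sketch only: the dataset is not public, so the assumption that each trajectory carries a day index (days 1-7 for training, day 8 for validation, day 9 for testing) and the function/field names are illustrative, not the authors' actual preprocessing code.

```python
from collections import defaultdict

def split_by_day(trajectories, train_days=range(1, 8), val_day=8, test_day=9):
    """Split trajectories into train/val/test subsets by recording day.

    `trajectories` is assumed to be an iterable of (day_index, trajectory)
    pairs; this layout is a guess, since the ATC dataset is not released.
    """
    splits = defaultdict(list)
    for day, traj in trajectories:
        if day in train_days:
            splits["train"].append(traj)
        elif day == val_day:
            splits["val"].append(traj)
        elif day == test_day:
            splits["test"].append(traj)
    return splits["train"], splits["val"], splits["test"]
```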
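
The hyperparameters reported in the Experiment Setup row can be collected into a single configuration object, which is how a re-implementation might organize them. The numeric values below are taken from the paper; the class and field names are hypothetical and do not come from the authors' code.

```python
from dataclasses import dataclass

@dataclass
class FlightBERTPPConfig:
    # Values as reported in the paper's experiment setup; names are illustrative.
    embed_dim: int = 128         # embedding size for points, horizons, differentials
    conv1d_channels: int = 128   # Conv1D channels match the embedding size
    encoder_blocks: int = 4      # Transformer blocks in the trajectory encoder
    decoder_blocks: int = 4      # Transformer blocks in the differential-prompted decoder
    hidden_size: int = 768       # hidden states of each Transformer block
    num_heads: int = 4           # attention heads in every Transformer block
    obs_len: int = 9             # observed trajectory points (latest 3 minutes)
    pred_len: int = 15           # predicted trajectory points (next 5 minutes)
    learning_rate: float = 1e-4  # initial learning rate for the Adam optimizer
```

In a PyTorch re-implementation, the optimizer line would then simply read `torch.optim.Adam(model.parameters(), lr=cfg.learning_rate)`, matching the training setup described above.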