Principled Simplicial Neural Networks for Trajectory Prediction
Authors: T. Mitchell Roddenberry, Nicholas Glaze, Santiago Segarra
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We then demonstrate the effectiveness of this architecture in extrapolating trajectories on synthetic and real datasets, with particular emphasis on the gains in generalizability to unseen trajectories." and Section 6 (Experiments). |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Rice University, Houston, Texas, USA. |
| Pseudocode | Yes | "Algorithm 1: SCoNe for Trajectory Prediction" |
| Open Source Code | Yes | "Code available at https://github.com/nglaze00/SCoNe_GCN." |
| Open Datasets | Yes | "Data available from NOAA/AOML at http://www.aoml.noaa.gov/envids/gld/ and as supplementary material." and "Following the example of Schaub et al. (2020), we generate a simplicial complex by drawing 400 points uniformly at random in the unit square, and then applying a Delaunay triangulation to obtain a mesh, after which we remove all nodes and edges in two regions, pictured in Fig. 2(a)." (see the first sketch after the table) |
| Dataset Splits | No | For the synthetic dataset, it states: 'We generate 1000 such trajectories for our experiment, using 800 of them for training and 200 for testing.' For the Berlin dataset, it states: 'divided into an 80/20 train/test split.' A distinct validation split is not explicitly provided. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | "In evaluating our proposed architecture for trajectory prediction, we consider SCoNe with 3 layers, where each layer has F = 16 hidden features. By default, we use the tanh activation function φ(·), but we also use ReLU and sigmoid activations to compare. In training SCoNe, we minimize the cross-entropy between the softmax output z and the ground truth final nodes in each batch of training samples." (sketched after the table) |
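
As a concrete illustration of the synthetic-complex construction quoted in the Open Datasets row, here is a minimal Python sketch: it draws 400 points uniformly at random in the unit square, applies a Delaunay triangulation, and drops the nodes (and incident simplices) inside two hole regions. The hole centers and radii are illustrative assumptions; the paper's actual regions are pictured in its Fig. 2(a).

```python
# Sketch of the synthetic-complex construction described in the paper.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(400, 2))  # 400 points in the unit square
tri = Delaunay(points)                         # mesh of 2-simplices (triangles)

# Remove all nodes inside two hole regions (centers/radii assumed here).
holes = [((0.3, 0.3), 0.1), ((0.7, 0.7), 0.1)]
keep = np.ones(len(points), dtype=bool)
for (cx, cy), r in holes:
    keep &= np.hypot(points[:, 0] - cx, points[:, 1] - cy) > r

# Keep only triangles whose three vertices all survive; their edges form the
# 1-skeleton of the simplicial complex on which trajectories are defined.
triangles = [t for t in tri.simplices if keep[t].all()]
edges = {tuple(sorted(e)) for t in triangles
         for e in ((t[0], t[1]), (t[0], t[2]), (t[1], t[2]))}
print(f"{keep.sum()} nodes, {len(edges)} edges, {len(triangles)} triangles")
```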
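
The Experiment Setup row fixes the architecture at 3 SCoNe layers with F = 16 hidden features, tanh activations, and a cross-entropy loss on the softmax output. Below is a minimal PyTorch sketch of such a configuration. The layer form (separate learned weights on the identity, lower-Laplacian `B1^T B1`, and upper-Laplacian `B2 B2^T` terms) follows the SCoNe layer described in the paper; the class names, readout, and toy usage are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SCoNeLayer(nn.Module):
    """One simplicial layer: separate weights on identity, lower, upper terms."""
    def __init__(self, f_in, f_out):
        super().__init__()
        self.w_id = nn.Linear(f_in, f_out, bias=False)
        self.w_low = nn.Linear(f_in, f_out, bias=False)
        self.w_up = nn.Linear(f_in, f_out, bias=False)

    def forward(self, h, B1, B2):
        lower = B1.T @ (B1 @ h)      # lower Laplacian term  B1^T B1 h
        upper = B2 @ (B2.T @ h)      # upper Laplacian term  B2 B2^T h
        return torch.tanh(self.w_id(h) + self.w_low(lower) + self.w_up(upper))

class SCoNe(nn.Module):
    def __init__(self, hidden=16, n_layers=3):
        super().__init__()
        dims = [1] + [hidden] * n_layers
        self.layers = nn.ModuleList(
            SCoNeLayer(a, b) for a, b in zip(dims[:-1], dims[1:]))
        self.readout = nn.Linear(hidden, 1, bias=False)

    def forward(self, flow, B1, B2):
        h = flow                     # edge flow of the partial trajectory
        for layer in self.layers:
            h = layer(h, B1, B2)
        # Project edge features to nodes via B1: one logit per node.
        return (B1 @ self.readout(h)).squeeze(-1)

# Toy usage with random boundary matrices (B1: nodes x edges, B2: edges x
# triangles); a real run would build these from the simplicial complex.
N, E, T = 8, 15, 5
B1, B2 = torch.randn(N, E), torch.randn(E, T)
model = SCoNe(hidden=16, n_layers=3)
logits = model(torch.randn(E, 1), B1, B2)
target = torch.tensor(3)             # ground-truth final node
loss = nn.CrossEntropyLoss()(logits.unsqueeze(0), target.unsqueeze(0))
```

`nn.CrossEntropyLoss` applies the softmax internally, matching the quoted cross-entropy between the softmax output z and the ground-truth final nodes. The paper restricts the softmax to the neighbors of the trajectory's last node; that masking step is omitted here for brevity.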