Representing Spatial Trajectories as Distributions
Authors: Dídac Surís Coll-Vinent, Carl Vondrick
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show the method's advantage over baselines in prediction tasks. "Our experiments on human movement datasets show that our method can accurately predict the past and future of a trajectory segment, as well as the interpolation between two different segments, outperforming autoregressive baselines. Additionally, it can do so for any continuous point in time." |
| Researcher Affiliation | Academia | Dídac Surís, Columbia University, didac.suris@columbia.edu; Carl Vondrick, Columbia University, vondrick@cs.columbia.edu |
| Pseudocode | No | The paper describes the model architecture and training process but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | "See trajectories.cs.columbia.edu for video results and code." Code is also included in the supplementary materials. |
| Open Datasets | Yes | "We extract human movement trajectories from the FineGym [45], Diving48 [29] and FisV [57] datasets, which correspond to gymnastics, diving and figure skating, respectively." |
| Dataset Splits | No | The paper mentions training and testing, and refers to Appendices B and C for more details, but the main text does not state specific training, validation, or test split percentages or sample counts. |
| Hardware Specification | No | The paper states that training details are in Appendix C, but the main text does not specify any particular hardware (e.g., GPU models, CPU types) used for the experiments. |
| Software Dependencies | No | The paper mentions using a Transformer encoder, ResNet, and OpenPose, and cites PyTorch, but it does not provide version numbers for any of these software components. |
| Experiment Setup | No | The paper describes the general architecture and some training concepts (e.g., a triplet loss, the reparameterization trick, box embeddings), but it defers details to Appendix C and does not give specific hyperparameter values such as learning rates, batch sizes, or optimizer settings in the main text. |