Discovering Intrinsic Spatial-Temporal Logic Rules to Explain Human Actions
Authors: Chengzhi Cao, Chao Yang, Ruimao Zhang, Shuang Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the model's superior interpretability and prediction performance on pedestrian and NBA basketball player datasets, both achieving promising results. In this section, we provide some implementation details and show ablation studies as well as visualization to evaluate the performance of our framework. We compare our model with several state-of-the-art approaches... |
| Researcher Affiliation | Academia | Chengzhi Cao (1,2), Chao Yang (1), Ruimao Zhang (1), Shuang Li (1); (1) The Chinese University of Hong Kong (Shenzhen); (2) University of Science and Technology of China |
| Pseudocode | No | The paper describes the learning algorithm and its steps (E-step, M-step) in detail within Section 4 and illustrates the framework in Figure 2, but it does not include a formally structured pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statement about making the source code available, nor does it include a link to a code repository. |
| Open Datasets | Yes | Stanford Drone Dataset. This dataset consists of more than 11,000 persons in 20 scenes captured from the campus of Stanford University in bird's-eye view. We follow the [27] standard train-test split, and predict the future 4.8s (12 frames) using past 3.2s (8 frames). |
| Dataset Splits | No | The paper mentions training and testing splits: 'All models were trained and tested on the same split of the dataset, as suggested by the benchmark.' and 'We follow the [27] standard train-test split'. However, it does not explicitly specify a validation split or details for cross-validation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. It only mentions training the network. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not provide specific version numbers for any software components, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | We train the network using Adam optimizer with a learning rate of 0.001 and batch size 16 for 500 epochs. |
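The reported setup (Adam optimizer, learning rate 0.001, batch size 16, 500 epochs) can be sketched as a minimal training loop. Since the paper releases no code and names no framework, PyTorch is an assumption here, and the toy model and random trajectory-style data (8 past frames predicting 12 future frames, mirroring the 3.2s/4.8s Stanford Drone protocol) are hypothetical stand-ins, not the authors' architecture.

```python
# Minimal sketch of the stated training configuration:
# Adam, lr = 0.001, batch size 16, 500 epochs.
# Model, data, and framework choice (PyTorch) are assumptions;
# the paper does not release code.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy trajectory-style data: 8 past (x, y) frames -> 12 future frames,
# echoing the 3.2 s input / 4.8 s prediction horizon on SDD.
past = torch.randn(64, 8 * 2)
future = torch.randn(64, 12 * 2)
loader = DataLoader(TensorDataset(past, future), batch_size=16, shuffle=True)

# Hypothetical stand-in predictor (the paper's model is rule-based).
model = nn.Sequential(nn.Linear(8 * 2, 64), nn.ReLU(), nn.Linear(64, 12 * 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

init_loss = loss_fn(model(past), future).item()

for epoch in range(500):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

final_loss = loss_fn(model(past), future).item()
```

On random data the loop simply overfits the toy tensors, but it makes the hyperparameters from the paper concrete and reproducible in isolation.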