Sequential Attention Source Identification Based on Feature Representation
Authors: Dongpeng Hou, Zhen Wang, Chao Gao, Xuelong Li
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments against SOTA methods demonstrate TGASI's higher detection performance and scalability in different scenarios. |
| Researcher Affiliation | Academia | Dongpeng Hou^{1,2}, Zhen Wang^{1,2}, Chao Gao^{2} and Xuelong Li^{2}; 1: School of Mechanical Engineering, Northwestern Polytechnical University (NWPU); 2: School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University (NWPU) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/cgao-comp/TGASI. |
| Open Datasets | Yes | Six social networks are selected to evaluate the performance of all localization methods. Footnote: the six datasets are available at http://snap.stanford.edu |
| Dataset Splits | No | The paper mentions a '10-fold cross-validation strategy to divide the training dataset and the test dataset', but does not explicitly describe a separate validation split (see the sketch after the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions types of models (e.g., GNN, GRU) and general deep learning concepts, but does not provide specific software names with version numbers (e.g., PyTorch 1.9, Python 3.8) that would allow for reproducible setup of software dependencies. |
| Experiment Setup | No | The paper mentions 'low infection rates' and an 'early stopping mechanism' as settings, but it does not specify concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations for the model (an illustrative example of such a configuration follows the table). |
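
For the 'Dataset Splits' row: a minimal sketch, assuming a scikit-learn `KFold` implementation with placeholder data, of the 10-fold train/test division the paper describes. The array names, shapes, and label space are hypothetical and not taken from the paper or the TGASI repository.

```python
# Minimal sketch (not from the paper): a 10-fold train/test split as described,
# using scikit-learn's KFold; data shapes and labels are placeholders.
import numpy as np
from sklearn.model_selection import KFold

X = np.random.rand(100, 32)             # hypothetical: 100 cascades, 32-dim features
y = np.random.randint(0, 10, size=100)  # hypothetical source-node labels

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # Note: this yields only train/test folds; a held-out validation split
    # (not described in the paper) would require a further split of train_idx.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```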
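
For the 'Experiment Setup' row: a hypothetical example of the kind of training configuration (learning rate, batch size, epoch budget, early-stopping patience) the paper would need to report for full reproducibility. Every value and the stand-in validation loss below are illustrative, not taken from the paper or repository.

```python
# Hypothetical training configuration (illustrative values only; the paper
# reports none of these, which is why the "Experiment Setup" row is "No").
config = {
    "learning_rate": 1e-3,          # optimizer step size (placeholder)
    "batch_size": 32,               # cascades per mini-batch (placeholder)
    "max_epochs": 200,              # upper bound on training epochs (placeholder)
    "early_stopping_patience": 10,  # epochs tolerated without improvement
}

def validation_loss(epoch: int) -> float:
    """Stand-in for a real validation pass; decreases, then plateaus."""
    return max(1.0 - 0.05 * epoch, 0.3)

best_loss, stale_epochs = float("inf"), 0
for epoch in range(config["max_epochs"]):
    loss = validation_loss(epoch)
    if loss < best_loss:
        best_loss, stale_epochs = loss, 0
    else:
        stale_epochs += 1
        if stale_epochs >= config["early_stopping_patience"]:
            print(f"early stop at epoch {epoch}, best loss {best_loss:.2f}")
            break
```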