Cross-Modal Object Tracking: Modality-Aware Representations and a Unified Benchmark
Authors: Chenglong Li, Tianhao Zhu, Lei Liu, Xiaonan Si, Zilin Fan, Sulan Zhai
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the dataset are conducted, and we demonstrate the effectiveness of the proposed algorithm in two representative tracking frameworks against 19 state-of-the-art tracking methods. |
| Researcher Affiliation | Academia | Chenglong Li1,2,4, Tianhao Zhu3*, Lei Liu3*, Xiaonan Si3, Zilin Fan3, Sulan Zhai5 1Information Materials and Intelligent Sensing Laboratory of Anhui Province, Hefei, China 2Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, Hefei, China 3School of Computer Science and Technology, Anhui University, Hefei, China 4School of Artificial Intelligence, Anhui University, Hefei, China 5School of Mathematical Sciences, Anhui University, Hefei, China |
| Pseudocode | No | The paper describes methods and processes but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks. Figures illustrate architectures and training stages, but not code. |
| Open Source Code | Yes | Dataset, code, model and results are available at https://github.com/mmiclcl/source-code. |
| Open Datasets | Yes | Dataset, code, model and results are available at https://github.com/mmiclcl/source-code. |
| Dataset Splits | Yes | To this end, we randomly split the testing and training sets of our dataset with the ratio of 1 : 2. |
| Hardware Specification | Yes | The experiments are run on two platforms with Intel(R) Xeon(R) Silver 4210 CPU (32G RAM), GeForce RTX 3090 GPU and Intel(R) Xeon(R) Silver 4210 CPU (32G RAM), GeForce RTX 1080Ti GPU for DiMP-50 and RT-MDNet respectively. |
| Software Dependencies | No | The paper does not specify version numbers for any software libraries, frameworks, or programming languages used in the experiments (e.g., PyTorch 1.x, Python 3.x). |
| Experiment Setup | Yes | The learning rate of the network parameters is set to one-tenth of the default learning rate of the baseline network, and the number of iterations remains the same... The initial learning rate is set to 1e-6 and 1e-4, and the number of iterations is set to 50 and 1000, respectively. |
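The reported 1:2 test-to-train split can be sketched as a seeded random partition of sequence names. This is a minimal illustration, not the authors' released code; the function name, seed, and sequence naming are assumptions.

```python
import random

def split_sequences(sequence_names, test_ratio=1/3, seed=0):
    """Randomly partition tracking sequences into (testing, training)
    sets at a 1:2 test:train ratio, as the paper describes.

    The seed and helper name are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    names = list(sequence_names)
    rng.shuffle(names)
    n_test = round(len(names) * test_ratio)
    return names[:n_test], names[n_test:]

# Example: 30 hypothetical sequences -> 10 for testing, 20 for training.
test_set, train_set = split_sequences([f"seq{i:03d}" for i in range(30)])
print(len(test_set), len(train_set))
```

Splitting by whole sequences (rather than by frames) keeps each video entirely in one set, which avoids near-duplicate frames leaking between training and testing.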
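The "one-tenth of the default learning rate" rule from the experiment setup amounts to scaling each baseline parameter group's rate by 0.1. A hedged sketch, with hypothetical group names and default values (the paper does not list them):

```python
# Hypothetical default learning rates of the baseline tracker's
# parameter groups; the actual values are not given in the paper.
default_lrs = {"backbone": 1e-3, "classifier": 1e-2}

# Fine-tuning rates: one-tenth of each baseline default,
# per the experiment-setup description.
finetune_lrs = {name: lr / 10 for name, lr in default_lrs.items()}

print(finetune_lrs)
```

In a PyTorch-style setup the same scaling would typically be applied per optimizer parameter group before training begins.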