Toward Re-Identifying Any Animal
Authors: Bingliang Jiao, Lingqiao Liu, Liying Gao, Ruiqi Wu, Guosheng Lin, Peng Wang, Yanning Zhang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have demonstrated the remarkable generalization capability of our UniReID model. It showcases promising performance in handling arbitrary wildlife categories, offering significant advancements in the field of ReID for wildlife conservation and research purposes. Our work is available at https://github.com/JiaoBL1234/wildlife. |
| Researcher Affiliation | Academia | ¹School of Computer Science, Northwestern Polytechnical University, China; ²Ningbo Institute, Northwestern Polytechnical University, China; ³National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean, China; ⁴The University of Adelaide, Australia; ⁵Nanyang Technological University, Singapore |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our work is available at https://github.com/JiaoBL1234/wildlife. |
| Open Datasets | Yes | To address this challenge, we have created a comprehensive dataset called Wildlife-71, which includes ReID data from 71 different wildlife categories. This dataset is the first of its kind to encompass multiple object categories in the realm of ReID. Our work is available at https://github.com/JiaoBL1234/wildlife. |
| Dataset Splits | No | The paper specifies 67 training categories and 4 test categories, and mentions 'support data' for adapting to unseen categories, but does not explicitly describe a traditional validation split (e.g., percentages or counts); a category-disjoint split of this kind is sketched below the table. |
| Hardware Specification | Yes | Two NVIDIA TITAN GPUs are used for model training. |
| Software Dependencies | No | The paper mentions using 'pre-trained CLIP' as the backbone model but does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, or TensorFlow versions); a hedged backbone-loading sketch is given below the table. |
| Experiment Setup | Yes | Our model is trained for 20 epochs. The learning rate is initialized to 1.0 × 10⁻⁴ and divided by 10 at the 15th epoch. Random flipping, random erasing, and color jittering are employed for data augmentation. A training-configuration sketch follows the table. |
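
The category-disjoint protocol noted in the Dataset Splits row (67 training categories, 4 held-out test categories, plus per-category support data for adaptation) can be made concrete with a short sketch. This is a minimal illustration only: the on-disk layout (one folder per category under `Wildlife-71/`), the fixed seed, and the `support_frac` ratio are assumptions for illustration, not details reported by the authors.

```python
import random
from pathlib import Path

# Hypothetical layout: Wildlife-71/<category>/<images>. The directory
# structure, the seed, and the support fraction are all assumptions.
DATA_ROOT = Path("Wildlife-71")

categories = sorted(p.name for p in DATA_ROOT.iterdir() if p.is_dir())
assert len(categories) == 71, "expected 71 wildlife categories"

rng = random.Random(0)  # fixed seed so the split is reproducible
rng.shuffle(categories)
train_categories = categories[:67]  # seen during training
test_categories = categories[67:]   # held out entirely from training

def split_support_query(images, support_frac=0.2):
    """Reserve a small 'support' subset of an unseen category for adaptation;
    the remainder becomes the evaluation pool. support_frac is an assumption."""
    images = list(images)
    rng.shuffle(images)
    k = max(1, int(len(images) * support_frac))
    return images[:k], images[k:]
```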
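
Because the paper names a pre-trained CLIP backbone without pinning versions, the sketch below shows one plausible way to load it with OpenAI's `clip` package. The ViT-B/16 variant and the L2 normalization of the output features are assumptions, not the authors' confirmed configuration.

```python
import torch
import clip  # https://github.com/openai/CLIP; no version is stated in the paper

device = "cuda" if torch.cuda.is_available() else "cpu"

# The exact CLIP variant is an assumption; the paper only says "pre-trained CLIP".
model, preprocess = clip.load("ViT-B/16", device=device)

@torch.no_grad()
def extract_features(image_batch: torch.Tensor) -> torch.Tensor:
    """Encode preprocessed images into L2-normalized Re-ID descriptors."""
    feats = model.encode_image(image_batch.to(device)).float()
    return feats / feats.norm(dim=-1, keepdim=True)
```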
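
The reported schedule (20 epochs, initial learning rate 1.0 × 10⁻⁴ divided by 10 at epoch 15) and the three named augmentations map directly onto standard PyTorch components. The sketch below is a minimal rendering of that setup; the optimizer type, input resolution, jitter strength, and the placeholder model are assumptions.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR
from torchvision import transforms

# The three augmentations named in the paper; the input resolution and
# jitter strength are assumptions.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),  # random flipping
    transforms.ColorJitter(0.2, 0.2, 0.2),   # color jittering
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),         # random erasing (tensor-space op)
])

model = torch.nn.Linear(512, 67)   # placeholder head; the real model is CLIP-based
if torch.cuda.device_count() > 1:  # the paper reports two NVIDIA TITAN GPUs
    model = torch.nn.DataParallel(model)

optimizer = Adam(model.parameters(), lr=1.0e-4)  # optimizer choice is an assumption
scheduler = MultiStepLR(optimizer, milestones=[15], gamma=0.1)  # LR / 10 at epoch 15

for epoch in range(20):  # 20 training epochs, as reported
    # ... forward/backward passes over the training loader go here ...
    scheduler.step()
```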