Learning Spatial Similarity Distribution for Few-shot Object Counting
Authors: Yuanwu Xu, Feifan Song, Haofeng Zhang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments on two renowned public benchmark datasets, i.e., FSC-147 [Ranjan et al., 2021] and CARPK [Hsieh et al., 2017]. The results clearly illustrate that our approach surpasses the performance of current state-of-the-art methods. |
| Researcher Affiliation | Academia | Yuanwu Xu, Feifan Song, Haofeng Zhang — School of Artificial Intelligence, Nanjing University of Science and Technology. {xuyuanwu, sff, zhanghf}@njust.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/CBalance/SSD. |
| Open Datasets | Yes | FSC-147 is a comprehensive multi-class few-shot object counting dataset. It comprises a total of 6,135 images covering 89 distinct object categories. ... To facilitate experimentation, the dataset is further divided into training, validation, and testing subsets, with each subset containing 29 non-overlapping object categories. CARPK is a class-specific car counting dataset, which consists of 1,448 images of parking lots from a bird's-eye view. ... The training set comprises three scenes, while a separate scene is designated for testing. |
| Dataset Splits | Yes | To facilitate experimentation, the dataset is further divided into training, validation, and testing subsets, with each subset containing 29 non-overlapping object categories. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions 'AdamW' as the optimizer but does not specify version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages. |
| Experiment Setup | Yes | We apply AdamW [Loshchilov and Hutter, 2017] as the optimizer with a learning rate of 1×10⁻⁴, and the learning rate decays with a rate of 0.95 after each epoch. The batch size is 4 and the model is trained for 100 epochs. ... γ and η in the DIS method are set to 32 and 12. |
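
The quoted setup (AdamW, initial learning rate 1×10⁻⁴, decay factor 0.95 applied after each of 100 epochs) implies an exponential learning-rate schedule. A minimal sketch of that schedule, assuming the decay is applied multiplicatively once per epoch (the constant names below are illustrative, not from the paper):

```python
# Learning-rate schedule implied by the paper's reported setup:
# initial lr 1e-4, multiplied by 0.95 after every epoch, for 100 epochs.
INITIAL_LR = 1e-4
DECAY = 0.95
EPOCHS = 100

def lr_at_epoch(epoch: int) -> float:
    """Learning rate in effect during the given (0-indexed) epoch."""
    return INITIAL_LR * DECAY ** epoch

schedule = [lr_at_epoch(e) for e in range(EPOCHS)]
print(f"epoch 0:  {schedule[0]:.2e}")   # starts at 1e-4
print(f"epoch 99: {schedule[-1]:.2e}")  # decayed to ~0.6% of the start
```

In PyTorch this would correspond to `torch.optim.AdamW` paired with `torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)`, stepping the scheduler once per epoch.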