A Fixed-Point Approach to Unified Prompt-Based Counting
Authors: Wei Lin, Antoni B. Chan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The effectiveness of this method is substantiated both theoretically and experimentally. Additionally, a contrastive training scheme is implemented to mitigate dataset bias inherent in current class-agnostic counting datasets, a strategy whose effectiveness is confirmed by our ablation study. Our model excels on prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks. Experiments Datasets. We employ FSC-147 (Ranjan et al. 2021) for training and evaluating the proposed prompt counting model. It encompasses 147 distinct categories. Additionally, the CARPK (Hsieh, Lin, and Hsu 2017) car counting dataset is used to assess the model's capability for cross-dataset adaptation. Evaluation metrics. We assess performance using Mean Absolute Error ($\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\lvert C_i - \hat{C}_i \rvert$) and root Mean Squared Error ($\mathrm{MSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(C_i - \hat{C}_i)^2}$), where $C_i$ and $\hat{C}_i$ denote the predicted and ground-truth counts. (A code sketch of these metrics follows the table.) |
| Researcher Affiliation | Academia | Wei Lin, Antoni B. Chan, Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong, Hong Kong SAR, China; elonlin24@gmail.com, abchan@cityu.edu.hk |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about open-source code release or a link to a code repository. |
| Open Datasets | Yes | Experiments Datasets. We employ FSC-147 (Ranjan et al. 2021) for training and evaluating the proposed prompt counting model. It encompasses 147 distinct categories. Additionally, the CARPK (Hsieh, Lin, and Hsu 2017) car counting dataset is used to assess the model's capability for cross-dataset adaptation. |
| Dataset Splits | Yes | Experiments Datasets. We employ the FSC-147 (Ranjan et al. 2021) for training and evaluating the proposed prompt counting model... Evaluation metrics. In the formulation we use N to represent the number of samples in either the validation or test set... Prompt Counting Results As shown in Table 1, we conduct a comparative analysis across different prompt types... Box prompts: ... achieves MAE of 16.36 and 13.15 on the validation set respectively. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Spacy' and 'LLaMA-Adapter V2' with citations, but does not provide specific version numbers for these or for other software dependencies, such as deep learning frameworks (e.g., PyTorch, TensorFlow), that would be necessary for replication. |
| Experiment Setup | Yes | Experimental results demonstrate that the optimal performance is achieved when T is set to 2. ... We trained four models with T ∈ {1, 2, 3, 4} for comparison, as shown in Figure 6(a). The best performance is obtained with T = 2. |
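
The MAE and root MSE formulas quoted in the Research Type row are simple to reproduce; below is a minimal sketch in Python/NumPy. The function name and the toy counts are illustrative, not from the paper.

```python
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """MAE and root MSE over N images, as quoted in the evaluation-metrics excerpt."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.mean(np.abs(pred - gt))           # (1/N) * sum_i |C_i - C*_i|
    rmse = np.sqrt(np.mean((pred - gt) ** 2))  # sqrt((1/N) * sum_i (C_i - C*_i)^2)
    return mae, rmse

# Toy example: predicted vs. ground-truth counts for three images
mae, rmse = counting_metrics([12, 48, 7], [10, 50, 7])
print(f"MAE={mae:.2f}, MSE={rmse:.2f}")  # MAE=1.33, MSE=1.63
```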
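
For the T-iteration ablation in the Experiment Setup row, the report does not describe the module being iterated, so the following is only a hypothetical sketch of weight-tied fixed-point unrolling with a configurable T (here T = 2, the reported optimum). `FixedPointRefiner`, its layer structure, and the feature dimensions are invented for illustration and should not be read as the paper's architecture.

```python
import torch
import torch.nn as nn

class FixedPointRefiner(nn.Module):
    """Hypothetical sketch: apply the same refinement step T times (weight sharing)."""

    def __init__(self, dim: int = 256, T: int = 2):
        super().__init__()
        self.T = T
        # Illustrative update rule: fuse the current estimate with image features
        self.step = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, prompt_feat: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        z = prompt_feat
        for _ in range(self.T):  # unroll T fixed-point iterations
            z = self.step(torch.cat([z, img_feat], dim=-1))
        return z

refiner = FixedPointRefiner(dim=256, T=2)
out = refiner(torch.randn(1, 256), torch.randn(1, 256))
print(out.shape)  # torch.Size([1, 256])
```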