A Benchmark and Asymmetrical-Similarity Learning for Practical Image Copy Detection
Authors: Wenhao Wang, Yifan Sun, Yi Yang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that ASL outperforms state-of-the-art methods by a clear margin, confirming that solving the symmetric-asymmetric conflict is critical for ICD. |
| Researcher Affiliation | Collaboration | Wenhao Wang (1,2)*, Yifan Sun (2), Yi Yang (3); 1: ReLER, University of Technology Sydney; 2: Baidu Research; 3: Zhejiang University. wangwenhao0716@gmail.com, sunyifan01@baidu.com, yangyics@zju.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | The NDEC dataset and code are available at https://github.com/WangWenhao0716/ASL. |
| Open Datasets | Yes | Building on existing ICD datasets, this paper constructs a new dataset by adding 100,000 and 24,252 hard negative pairs to the training and test sets, respectively. The NDEC dataset and code are available at https://github.com/WangWenhao0716/ASL. |
| Dataset Splits | No | The paper mentions 'training' and 'test' sets with specific image counts but does not provide details about a distinct 'validation' dataset split or how it was derived. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions various deep learning techniques and losses (e.g., CosFace, triplet loss) but does not provide specific software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8). |
| Experiment Setup | No | The paper mentions replacing 8192-dim features with 2048-dim features and using CosFace as a loss function, but it does not provide specific hyperparameters such as learning rate, batch size, or optimizer settings for the experimental setup. |
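For reference, the CosFace (large-margin cosine) loss named in the rows above can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation: the scale `s=30.0`, margin `m=0.35`, and the 1000-class count are common defaults assumed here, since the paper does not report its hyperparameters.

```python
import torch
import torch.nn.functional as F

def cosface_logits(features, weights, labels, s=30.0, m=0.35):
    """CosFace logits: scaled cosine similarities with a margin m
    subtracted from the target-class entry only.

    features: (B, D) embeddings; weights: (C, D) class centers.
    s and m are illustrative defaults, not the paper's settings.
    """
    # Cosine similarity between L2-normalised embeddings and class centers.
    cos = F.linear(F.normalize(features), F.normalize(weights))  # (B, C)
    # Subtract the margin from the ground-truth class cosine only.
    one_hot = F.one_hot(labels, num_classes=weights.shape[0]).float()
    return s * (cos - m * one_hot)

# Usage: feed the margin-adjusted logits into standard cross-entropy.
feats = torch.randn(4, 2048)           # 2048-dim features, as in the paper
centers = torch.randn(1000, 2048)      # hypothetical 1000 training identities
labels = torch.randint(0, 1000, (4,))
loss = F.cross_entropy(cosface_logits(feats, centers, labels), labels)
```

The margin pushes each embedding's cosine to its own class center to exceed all other cosines by at least `m`, which tightens intra-class clusters in the normalized feature space.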