Let All Be Whitened: Multi-Teacher Distillation for Efficient Visual Retrieval
Authors: Zhe Ma, Jianfeng Dong, Shouling Ji, Zhenguang Liu, Xuhong Zhang, Zonghui Wang, Sifeng He, Feng Qian, Xiaobo Zhang, Lei Yang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two landmark image retrieval datasets and one video retrieval dataset demonstrate the effectiveness of our proposed method, and its good balance of retrieval performance and efficiency. |
| Researcher Affiliation | Collaboration | Zhe Ma (1), Jianfeng Dong (2,4)*, Shouling Ji (1), Zhenguang Liu (1)*, Xuhong Zhang (1), Zonghui Wang (1)*, Sifeng He (3), Feng Qian (3), Xiaobo Zhang (3), Lei Yang (3); (1) Zhejiang University, (2) Zhejiang Gongshang University, (3) Ant Group, (4) Zhejiang Key Lab of E-Commerce |
| Pseudocode | No | The paper describes the methods in prose and mathematical equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is released at https://github.com/Maryeon/whiten_mtd. |
| Open Datasets | Yes | we use the clean version of Google Landmark Dataset v2 (GLDv2-clean) (Weyand et al. 2020) as the training set and two additional independent datasets RParis6k (RPar) and ROxford5k (ROxf) (Radenović et al. 2018) for evaluation. |
| Dataset Splits | No | The paper specifies the use of GLDv2-clean as the training set and ROxf/RPar for evaluation, but it does not explicitly provide details about a separate validation set split (e.g., percentages, sample counts, or specific predefined validation splits). |
| Hardware Specification | No | The paper mentions 'The computation overhead is measured by the number of GFLOPs when a model encodes a given image of size 1024×768', which is a performance metric, but it does not specify the hardware (e.g., specific GPU or CPU models) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1') needed to replicate the experiment. |
| Experiment Setup | No | The paper states 'We provide implementation details and additional experiment results in supplementary materials.' indicating that specific experimental setup details, such as hyperparameter values, are not fully provided in the main text. |