Comparison Knowledge Translation for Generalizable Image Classification

Authors: Zunlei Feng, Tian Qiu, Sai Wu, Xiaotuan Jin, Zengliang He, Mingli Song, Huiqiong Wang

IJCAI 2022

Reproducibility assessment (each entry lists the variable, the result, and the supporting LLM response):

Research Type: Experimental
"Exhaustive experiments show that CCT-Net achieves surprising generalization ability on unseen categories and SOTA performance on target categories. In the experiments, we adopt five datasets, including MNIST [LeCun et al., 1998], CIFAR-10 [Krizhevsky, 2009], STL-10 [Coates et al., 2011], Oxford-IIIT Pets [Parkhi et al., 2012], and mini-ImageNet [Deng et al., 2009], to verify the effectiveness of the proposed CKT-Task and CCT-Net."

Researcher Affiliation: Collaboration
Zhejiang University; Ningbo Research Institute, Zhejiang University; Hangzhou Honghua Digital Technology Co., Ltd.; Shanghai Institute for Advanced Study of Zhejiang University; Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies.

Pseudocode: Yes
"The complete training algorithm for CCT-Net is summarized in Algorithms 1 and 2 of the supplements."

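The algorithms themselves appear only in the supplementary material. As a rough illustration of what a comparison-based training step could look like, here is a minimal PyTorch sketch assuming a binary same-category objective; ComparisonNet and the pairing logic are hypothetical stand-ins, not the paper's actual Algorithms 1 and 2.

```python
# Hypothetical sketch of one comparison-based training step (NOT the paper's
# Algorithms 1-2, which appear only in the supplements). Assumes a network
# that scores whether two input images belong to the same category.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComparisonNet(nn.Module):  # hypothetical stand-in for CCT-Net
    def __init__(self, backbone, feat_dim=256):
        super().__init__()
        self.backbone = backbone                 # e.g. a ViT-B/16 feature extractor
        self.head = nn.Linear(2 * feat_dim, 1)   # same/different score

    def forward(self, x1, x2):
        f1, f2 = self.backbone(x1), self.backbone(x2)
        return self.head(torch.cat([f1, f2], dim=-1)).squeeze(-1)

def train_step(model, optimizer, x1, x2, same_label):
    """One optimization step on an image pair with a binary same-category label."""
    optimizer.zero_grad()
    logits = model(x1, x2)
    loss = F.binary_cross_entropy_with_logits(logits, same_label.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```
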
Open Source Code: No
The paper does not provide any explicit statement about open-sourcing the code, nor a link to a code repository.

Open Datasets: Yes
"In the experiments, we adopt five datasets, including MNIST [LeCun et al., 1998], CIFAR-10 [Krizhevsky, 2009], STL-10 [Coates et al., 2011], Oxford-IIIT Pets [Parkhi et al., 2012], and mini-ImageNet [Deng et al., 2009], to verify the effectiveness of the proposed CKT-Task and CCT-Net."

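All five benchmarks are public. Four have built-in torchvision loaders; mini-ImageNet does not and typically requires a third-party split of ImageNet. A minimal loading sketch, with illustrative transforms:

```python
# Loading four of the five public benchmarks via torchvision; mini-ImageNet
# has no built-in loader and usually requires a third-party split file.
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),  # ViT-B/16 input resolution (assumed)
    transforms.ToTensor(),
])

mnist   = datasets.MNIST("data", train=True, download=True, transform=tfm)
cifar10 = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
stl10   = datasets.STL10("data", split="train", download=True, transform=tfm)
pets    = datasets.OxfordIIITPet("data", split="trainval", download=True, transform=tfm)
```
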
Dataset Splits: No
The paper states that "categories of each dataset are evenly split into the source and target categories" and that "80% of the target datasets are used as annotated samples for semi-supervised methods." However, it provides neither specific train/validation/test percentages or sample counts for reproduction, nor citations to predefined splits for all datasets.

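For concreteness, the split the paper describes could be reconstructed along the following lines; the random seed and per-dataset counts are assumptions, since the paper does not specify them.

```python
# Hypothetical reconstruction of the described split: categories are evenly
# divided into source and target, and 80% of target-category samples are
# treated as annotated. Seeds and exact counts are assumptions.
import random

def split_categories(num_classes, seed=0):
    """Evenly divide class indices into source and target categories."""
    rng = random.Random(seed)
    classes = list(range(num_classes))
    rng.shuffle(classes)
    half = num_classes // 2
    return classes[:half], classes[half:]  # (source, target)

def split_annotated(sample_indices, frac=0.8, seed=0):
    """Mark a fraction of target-category samples as annotated."""
    rng = random.Random(seed)
    idx = list(sample_indices)
    rng.shuffle(idx)
    cut = int(frac * len(idx))
    return idx[:cut], idx[cut:]  # (annotated, unannotated)
```
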
Hardware Specification: No
The paper does not provide specific hardware details, such as exact GPU/CPU models, processor types, or memory amounts, used for running the experiments.

Software Dependencies: No
The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) that would be needed to replicate the experiment.

Experiment Setup: No
The paper describes network architecture details such as "ViT-B/16 is adopted as the backbone", "12 attention heads", and "fully connected layer: 4096, Leaky ReLU, linear layer: 1024, linear layer: 256". However, it does not provide specific hyperparameter values, such as learning rate, batch size, or number of epochs, in the main text.

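The described layers can be sketched as a head on a standard ViT-B/16 backbone (12 attention heads is the stock ViT-B configuration). How the head attaches to the backbone is an assumption; as noted, learning rate, batch size, and epoch count are not given in the main text.

```python
# Sketch of the described layers on a ViT-B/16 backbone. Attaching the head
# to the 768-dim [CLS] feature is an assumption, not stated in the paper.
import torch.nn as nn
from torchvision.models import vit_b_16

backbone = vit_b_16(weights=None)
backbone.heads = nn.Identity()  # expose the 768-dim [CLS] feature

head = nn.Sequential(
    nn.Linear(768, 4096),   # "fully connected layer: 4096"
    nn.LeakyReLU(),         # "Leaky ReLU"
    nn.Linear(4096, 1024),  # "linear layer: 1024"
    nn.Linear(1024, 256),   # "linear layer: 256"
)
```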