Weak-shot Fine-grained Classification via Similarity Transfer
Authors: Junjie Chen, Li Niu, Liu Liu, Liqing Zhang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments demonstrate the effectiveness of our weak-shot setting and our SimTrans method. Datasets and codes are available at https://github.com/bcmi/SimTrans-Weak-Shot-Classification. |
| Researcher Affiliation | Academia | Junjie Chen, Li Niu, Liu Liu, Liqing Zhang. MoE Key Lab of Artificial Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University. {chen.bys, ustcnewly, shirlley}@sjtu.edu.cn, zhang-lq@cs.sjtu.edu.cn |
| Pseudocode | No | The paper describes its methods in prose and with diagrams but does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | Datasets and codes are available at https://github.com/bcmi/SimTrans-Weak-Shot-Classification. |
| Open Datasets | Yes | We conduct experiments based on three fine-grained datasets: CompCars [52] (Car for short), CUB [48], and FGVC [26]. |
| Dataset Splits | No | Table 1 provides 'Train' and 'Test' statistics for the datasets, and the text mentions a 'base training/test set' and a 'novel training set'. However, no explicit validation split with specific counts or percentages is provided; the paper only mentions 'cross-validation' for hyperparameter tuning. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using ResNet50 as a backbone but does not specify any software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions, CUDA versions). |
| Experiment Setup | Yes | The classification loss and the adversarial loss are balanced with a hyper-parameter β, set as 0.1 via cross-validation. ... where α is a hyper-parameter set as 0.1 by cross-validation. ... We use Cm = 10 and M = 100 for both training and testing of SimNet. (A hedged configuration sketch follows this table.) |
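
The following is a minimal, hedged sketch of the reported experiment setup. Only the numeric values (β = 0.1, α = 0.1, Cm = 10, M = 100) come from the paper text quoted above; every function and variable name is a hypothetical placeholder rather than the authors' released code, which lives in the linked repository.

```python
# Hedged sketch of the reported hyper-parameter setup. Only the numeric values
# (beta = 0.1, alpha = 0.1, Cm = 10, M = 100) are taken from the paper text;
# all names below are hypothetical placeholders, not the authors' code.
import torch

HPARAMS = {
    "beta": 0.1,   # balances the classification loss and the adversarial loss (cross-validated)
    "alpha": 0.1,  # second weighting hyper-parameter, also set by cross-validation
    "Cm": 10,      # SimNet sampling parameter used for both training and testing
    "M": 100,      # SimNet sampling parameter used for both training and testing
}

def combined_loss(cls_loss: torch.Tensor,
                  adv_loss: torch.Tensor,
                  beta: float = HPARAMS["beta"]) -> torch.Tensor:
    """Balance the classification loss against the adversarial loss with beta,
    as described in the Experiment Setup row above."""
    return cls_loss + beta * adv_loss

# Example usage with dummy scalar losses:
if __name__ == "__main__":
    cls_loss = torch.tensor(1.25)
    adv_loss = torch.tensor(0.40)
    print(combined_loss(cls_loss, adv_loss))  # 1.25 + 0.1 * 0.40 = 1.29
```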