Incremental Few-Shot Learning for Pedestrian Attribute Recognition
Authors: Liuyu Xiang, Xiaoming Jin, Guiguang Ding, Jungong Han, Leida Li
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | By conducting extensive experiments on the benchmark datasets PETA and RAP under the incremental few-shot setting, we show that our method is able to perform the task with competitive performance and low resource requirements. |
| Researcher Affiliation | Academia | 1 School of Software, Tsinghua University, Beijing, China; 2 WMG Data Science, University of Warwick, CV4 7AL Coventry, United Kingdom; 3 School of Artificial Intelligence, Xidian University, Xi'an 710071, China |
| Pseudocode | Yes | Algorithm 1: N-way K-shot Episodic Training (an illustrative episode-sampling sketch appears below the table) |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing code for the work described, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | We evaluate on two pedestrian attribute benchmark datasets PETA and RAP. |
| Dataset Splits | Yes | We follow [Li et al., 2015] and [Li et al., 2016], and divide the PETA and RAP into train/val/test sets with 5 random partitions (an illustrative partitioning sketch appears below the table). |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions 'DeepMAR [Li et al., 2015] with ResNet-50 as backbone' but does not specify programming language versions, library versions (e.g., PyTorch, TensorFlow), or other software dependencies with version numbers. |
| Experiment Setup | Yes | In order to train our meta network, we adopt SGD with a small learning rate of 10^-5, a momentum of 0.9, and a weight decay of 0.0005. We choose the batch size to be 32, train for 800 episodes, and choose the sampled N_novel^fake to be twice the expected N_novel (an optimizer-configuration sketch appears below the table). |
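
The Pseudocode row refers to the paper's Algorithm 1, N-way K-shot episodic training. As a rough illustration of what sampling one such episode involves, here is a minimal Python sketch; the `attribute_to_images` mapping and the default episode sizes are assumptions made for illustration and are not taken from the paper.

```python
import random

def sample_episode(attribute_to_images, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode: K support examples and a
    disjoint query set for each of the N sampled attribute classes."""
    # Randomly choose which N attribute classes appear in this episode.
    episode_attrs = random.sample(sorted(attribute_to_images), n_way)

    support, query = [], []
    for label, attr in enumerate(episode_attrs):
        # Draw K + n_query distinct images that carry this attribute.
        images = random.sample(attribute_to_images[attr], k_shot + n_query)
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query
```

In episodic training the meta network is updated once per sampled episode, so that training mimics the few-shot conditions encountered at test time.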
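The Dataset Splits row reports five random train/val/test partitions of PETA and RAP, following [Li et al., 2015] and [Li et al., 2016]. The sketch below shows a generic way to generate such partitions; the 50/10/40 ratios are assumed here for illustration and should be replaced by whatever the cited protocols prescribe.

```python
import random

def make_partitions(image_ids, n_partitions=5, ratios=(0.5, 0.1, 0.4), seed=0):
    """Generate several random train/val/test partitions of a dataset.
    The 50/10/40 ratios are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    partitions = []
    for _ in range(n_partitions):
        ids = list(image_ids)
        rng.shuffle(ids)
        n_train = int(ratios[0] * len(ids))
        n_val = int(ratios[1] * len(ids))
        partitions.append({
            "train": ids[:n_train],
            "val": ids[n_train:n_train + n_val],
            "test": ids[n_train + n_val:],
        })
    return partitions
```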
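The hyperparameters quoted in the Experiment Setup row translate directly into an optimizer configuration. The sketch below uses PyTorch purely as an example, since the paper does not name its framework; `meta_network` is a placeholder for the authors' model, which builds on DeepMAR with a ResNet-50 backbone.

```python
import torch

# Placeholder module standing in for the meta network; the real model builds on
# DeepMAR with a ResNet-50 backbone, so the dimensions here are illustrative only.
meta_network = torch.nn.Linear(2048, 35)

# Hyperparameters as reported: SGD, learning rate 1e-5, momentum 0.9, weight decay 5e-4.
optimizer = torch.optim.SGD(
    meta_network.parameters(),
    lr=1e-5,
    momentum=0.9,
    weight_decay=5e-4,
)

batch_size = 32      # reported batch size
num_episodes = 800   # reported number of training episodes
```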