Learning Attributes from the Crowdsourced Relative Labels

Authors: Tian Tian, Ning Chen, Jun Zhu

AAAI 2017

Reproducibility Variable Result LLM Response
Research Type Experimental Our experimental results demonstrate promise on discovering diverse and convincing attributes, which significantly improve the performance of the challenging zero-shot learning tasks. To demonstrate the efficacy of our methods, we conduct experiments on three image datasets.
Researcher Affiliation Academia Tian Tian, Ning Chen, Jun Zhu Dept. of Comp. Sci. & Tech., CBICR Center, State Key Lab for Intell. Tech. & Systems MOE Key lab of Bioinformatics, Bioinformatics Division and Center for Synthetic & Systems Biology TNList, Tsinghua University, Beijing, China {tiant16@mails, ningchen@, dcszj@}tsinghua.edu.cn
Pseudocode No No pseudocode or algorithm blocks were found in the paper. Derivations are deferred to Appendix A4, and the text explicitly says: "For brevity, we put the derivations in Appendix A4." This implies mathematical derivations rather than pseudocode.
Open Source Code No The paper provides no statement or link indicating that code for the described method is released. It references an appendix of derivations, but no code.
Open Datasets Yes The yellow flowers dataset is a subset of the Oxford flower dataset (Nilsback and Zisserman 2008) with 13 yellow flower species. The animals dataset is a subset of the animals with attributes (AwA) dataset (Lampert, Nickisch, and Harmeling 2009) with 16 species.
Dataset Splits Yes Specifically, we randomly split the 16 species into two parts, i.e. the source domain and the target domain. Then we train classifiers using the data and labels from the source domain, and transfer them into target-domain classifiers with the help of the attributes. During experiments, we use deep features extracted by a 19-layer convolutional neural network (VGG19) (Simonyan and Zisserman 2015) for classification, and the ESZSL algorithm (Romera-Paredes and Torr 2015), which requires binary attributes, to do zero-shot learning. Since each crowdsourced attribute has three possible values, we split it into two binary attributes to fit the learning algorithm, giving 2K crowdsourced attributes in total. To show performance under different conditions, we train classifiers on source domains with the number of species M_S varying in [11, 12, 13, 14], so the target-domain sizes M_T vary in [5, 4, 3, 2] correspondingly.
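The split procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the ternary attribute coding ({-1, 0, +1}) and the indicator-based binarization are assumptions, since the report does not specify how the three-valued attributes are encoded.

```python
import numpy as np

def ternary_to_binary(A):
    """Split each ternary attribute into two binary attributes.

    A: (M, K) matrix of per-class attribute values, assumed in {-1, 0, +1}.
    Returns an (M, 2K) binary matrix: a 'positive' indicator and a
    'negative' indicator per original attribute, giving 2K attributes.
    """
    pos = (A == 1).astype(int)   # indicator for value +1
    neg = (A == -1).astype(int)  # indicator for value -1
    return np.concatenate([pos, neg], axis=1)

# Random split of the 16 animal species into source/target domains.
rng = np.random.default_rng(0)
M_S = 13                          # source-domain size, varied in [11, 12, 13, 14]
perm = rng.permutation(16)
source, target = perm[:M_S], perm[M_S:]  # target size M_T = 16 - M_S
```

The binarized attributes can then be fed to ESZSL, which expects binary class-attribute signatures.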
Hardware Specification No The paper does not provide any specific hardware details such as CPU, GPU, or memory specifications used for running the experiments.
Software Dependencies No The paper mentions "a 19 layer convolution neural network (VGG19) (Simonyan and Zisserman 2015)" and "the ESZSL algorithm (Romera-Paredes and Torr 2015)" but does not provide specific version numbers for any software dependencies.
Experiment Setup Yes During label aggregation, we set ρ = 0.6 for the rabbits and the yellow flowers datasets based on our experience. For the animals dataset, results for multiple values of ρ are shown. The model parameters θ are uniformly initialized and then updated from the data.
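One plausible reading of a threshold ρ in label aggregation is a minimum agreement fraction among workers. The sketch below is only illustrative: the paper's actual aggregation is a probabilistic model, and the vote encoding and tie-handling here are assumptions.

```python
from collections import Counter

def aggregate(votes, rho=0.6):
    """Illustrative threshold aggregation (not the paper's model).

    votes: list of worker labels for one item, assumed in {-1, 0, +1}.
    Accept the majority value only if at least a fraction rho of the
    workers agree on it; otherwise return 0 ("uncertain").
    """
    value, count = Counter(votes).most_common(1)[0]
    return value if count / len(votes) >= rho else 0
```

For example, with ρ = 0.6, three of four workers voting +1 is accepted, while a 1-vs-1 split falls back to the uncertain label.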