Label Embedding with Partial Heterogeneous Contexts
Authors: Yaxin Shi, Donna Xu, Yuangang Pan, Ivor W. Tsang, Shirui Pan
AAAI 2019, pp. 4926-4933
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the label embeddings obtained with PHCLE achieve superb performance in the image classification task and exhibit good interpretability in the downstream label similarity analysis and image understanding tasks. From the Experiments section: "In this section, we first evaluate the label embeddings with zero-shot image classification task. Then, we demonstrate the label interpretability of PHCLE with two tasks: label similarity analysis and novel image understanding." |
| Researcher Affiliation | Academia | Yaxin Shi, Donna Xu, Yuangang Pan, Ivor W. Tsang, Shirui Pan, Centre for Artificial Intelligence (CAI), University of Technology Sydney, Australia. {Yaxin.Shi, Donna.Xu, Yuangang.Pan}@student.uts.edu.au, {Ivor.Tsang, Shirui.Pan}@uts.edu.au |
| Pseudocode | Yes | Algorithm 1: Alternating Minimization for PHCLE. Algorithm 2: FISTA for updating U. |
| Open Source Code | No | The paper mentions "the released codes" for a baseline method (TADW) but does not provide any statement or link for the source code of their own methodology (PHCLE). There is no mention of code availability in supplementary materials or via a repository link. |
| Open Datasets | Yes | We learn task-free label embeddings for the 1000 labels of the ImageNet 2012 dataset (Russakovsky et al. 2015). Specifically, we conduct experiments on the data that overlaps the 1000 ImageNet labels for the AWA (26) and aPY (22) datasets. Details of the adopted datasets are presented in Table 1. |
| Dataset Splits | No | The paper provides training and testing sample counts in Table 1, but does not explicitly mention a separate validation dataset split, its size, or how it was used to reproduce the experiment. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. It lacks any mention of the computing environment beyond general statements about training. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with versions). It does not mention software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | Parameter setting: For PHCLE, we set K = 50, d = 100, InnerMaxIter = 50, step size = 10^-5. The trade-off parameters are set via grid search (Wang et al. 2017) over {10^-2, 10^-1, 1, 10^1, 10^2} for each baseline. The number of negative samples k is set to 10 for EMF. The stopping condition of the algorithm is \|F(Û_new) - F(Û_old)\| / F(Û_old) < ϵ, where ϵ is a small tolerance value (ϵ = 0.0001 in our experiment). We adopt ResNet features (Xian, Schiele, and Akata 2017) as the image features and apply three representative zero-shot learning methods, ESZSL (Romera-Paredes et al. 2015), ConSE (Norouzi et al. 2013), and SJE (Akata et al. 2015), all with their default parameters, to all the embedding methods. |
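The paper's Algorithm 2 (FISTA for updating U) is not reproduced in this report. As a point of reference, a minimal generic FISTA sketch is given below, assuming a smooth loss f with Lipschitz-gradient constant L and a regulariser g with an easy proximal operator; the function names `fista`, `grad_f`, and `prox_g`, and the quadratic example, are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fista(grad_f, prox_g, L, U0, max_iter=50):
    """Generic FISTA for min_U f(U) + g(U).

    grad_f: gradient of the smooth term f
    prox_g: proximal operator of g, called as prox_g(X, step)
    L: Lipschitz constant of grad_f (step size is 1/L)
    """
    U = U0.copy()       # current iterate
    Y = U0.copy()       # extrapolated point
    t = 1.0             # momentum parameter
    for _ in range(max_iter):
        # Proximal gradient step from the extrapolated point
        U_next = prox_g(Y - grad_f(Y) / L, 1.0 / L)
        # Nesterov momentum update
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = U_next + ((t - 1.0) / t_next) * (U_next - U)
        U, t = U_next, t_next
    return U
```

For an L1-regularised problem, `prox_g` would be the soft-thresholding operator, `lambda X, s: np.sign(X) * np.maximum(np.abs(X) - lam * s, 0.0)`; with `lam = 0` it reduces to the identity and FISTA becomes plain accelerated gradient descent.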
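The stopping condition quoted in the Experiment Setup row is a standard relative-change test on the objective value. A minimal sketch, where the helper name `converged` and the small denominator floor are assumptions added for numerical safety, not details from the paper:

```python
def converged(F_old, F_new, eps=1e-4):
    """Relative-change stopping rule: |F_old - F_new| / F_old < eps.

    The max(..., 1e-12) floor guards against division by zero when the
    objective is (near) zero; it is an implementation assumption.
    """
    return abs(F_old - F_new) / max(abs(F_old), 1e-12) < eps
```

With the paper's tolerance ϵ = 0.0001, the alternating minimisation loop would terminate once successive objective values differ by less than 0.01% in relative terms.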