Attribute Propagation Network for Graph Zero-Shot Learning
Authors: Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
AAAI 2020, pp. 4868–4875 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments with two zero-shot learning settings and five benchmark datasets, APNet achieves either compelling performance or new state-of-the-art results. [...] 5 Experiments 5.1 Datasets 5.2 Implementation Details 5.3 Evaluation Criterion 5.4 Experimental Results |
| Researcher Affiliation | Academia | Lu Liu,1 Tianyi Zhou,2 Guodong Long,1 Jing Jiang,1 Chengqi Zhang1 1Center for AI, School of Computer Science, University of Technology Sydney 2Paul G. Allen Center for Computer Science & Engineering, University of Washington lu.liu-10@student.uts.edu.au, tianyizh@uw.edu, {guodong.long, jing.jiang, chengqi.zhang}@uts.edu.au |
| Pseudocode | No | The paper describes its method in prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | We used five widely-used zero-shot learning datasets in our experiments: AWA1 (Lampert, Nickisch, and Harmeling 2014), AWA2 (Xian et al. 2019), SUN (Patterson and Hays 2012), CUB (Welinder et al. 2010) and aPY (Farhadi et al. 2009). |
| Dataset Splits | Yes | To avoid overlaps between the test sets and ImageNet-1K, which is used for pretraining backbones, we followed the splits proposed in (Xian et al. 2019). [...] Table 1: Datasets Statistics. #* denotes the number of *. Tr-S, Te-S and Te-U denote seen classes in training, seen classes in test and unseen classes in test, respectively. |
| Hardware Specification | No | We also acknowledge the support of NVIDIA Corporation and Google Cloud with the donation of GPUs and computation credits. The paper mentions using “GPUs” and “Google Cloud” but does not specify exact GPU models, CPU types, or detailed cloud instance specifications. |
| Software Dependencies | No | The paper mentions using Adam optimizer but does not specify version numbers for any key software components, programming languages, or libraries used in the implementation (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes (see the configuration sketch after this table) | We trained our APNet with Adam (Kingma and Ba 2015) for 360 epochs with a weight decay factor of 0.0001. The initial learning rate was 0.00002 with a decrease of 0.1 every 240 epochs. The number of iterations in every epoch under an N-way-K-shot training strategy was |X_tr|/(NK), where N was 30 and K was 1 in our experiments. The temperature γ1 was 10 and γ2 was 30. Transformation functions g_i and f were linear transformations. The threshold for connecting edges was set to cos 40° ≈ 0.76. All nonlinear functions were ReLU except for σ, which was implemented using Sigmoid to map the result between 0 and 1. |
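
The quoted implementation details translate directly into an optimizer and scheduler configuration. The paper does not name its framework or release code, so the following is a minimal sketch assuming PyTorch; `model` is a placeholder stand-in for the authors' APNet, and the training loop body is elided.

```python
import math
import torch

# Placeholder for the (unreleased) APNet model; a single linear layer stands in here.
model = torch.nn.Linear(2048, 300)

# Hyperparameters quoted in the paper's implementation details.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, weight_decay=1e-4)
# Learning rate decays by a factor of 0.1 every 240 epochs, over 360 epochs total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=240, gamma=0.1)

GAMMA_1, GAMMA_2 = 10, 30                      # softmax temperatures γ1, γ2
N_WAY, K_SHOT = 30, 1                          # episodic N-way-K-shot sampling
EDGE_THRESHOLD = math.cos(math.radians(40))    # ≈ 0.766, threshold for connecting edges

for epoch in range(360):
    # Iterations per epoch = |X_tr| / (N * K), as stated in the paper.
    ...  # episodic training step omitted
    scheduler.step()
```

Note that `2e-5` and `1e-4` correspond to the stated learning rate of 0.00002 and weight decay of 0.0001; everything beyond those quoted values is an assumption for illustration.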