Adversarial Zero-shot Learning With Semantic Augmentation
Authors: Bin Tong, Martin Klinkigt, Junwen Chen, Xiankun Cui, Quan Kong, Tomokazu Murakami, Yoshiyuki Kobayashi
AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed method was extensively evaluated on small- and large-scale datasets, showing a significant improvement over state-of-the-art methods. |
| Researcher Affiliation | Industry | Bin Tong, Martin Klinkigt, Junwen Chen, Xiankun Cui, Quan Kong, Tomokazu Murakami, Yoshiyuki Kobayashi R&D Group, Hitachi, Japan {bin.tong.hh, martin.klinkigt.ut, quan.kong.xz}@hitachi.com {tomokazu.murakami.xr, yoshiyuki.kobayashi.gp}@hitachi.com |
| Pseudocode | Yes | Algorithm 1 GANZrl |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The small-scale datasets were Animals with Attributes (AwA), CUB-200-2011 (CUB) and SUN with Attribute (SUN). The large-scale datasets were ILSVRC2010 (ImageNet-1) and ILSVRC2012/ILSVRC2010 (ImageNet-2). |
| Dataset Splits | No | The paper mentions training and test sets but does not provide specific details on training, validation, and test dataset splits (e.g., percentages, sample counts, or explicit standard split citations for reproducibility). |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running the experiments (e.g., GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper mentions various algorithms and models (e.g., RMSprop, Adam, VGG-19, Inception-ResNet, batch normalization) but does not provide specific version numbers for any software dependencies, libraries, or frameworks used. |
| Experiment Setup | Yes | The setting of parameters shared by all datasets is as follows. RMSprop was used as the gradient descent algorithm for training the CGAN model with an initial learning rate of 10⁻⁴. Adam was used for learning the mapping, and its initial learning rate was set to 10⁻⁴ and 5×10⁻⁵. The activation functions were chosen from sigmoid, tanh, and leaky ReLU. The batch size was set to 64. The margins m and m_g of Equation 3 and Equation 4 were set to 0.1 or 0.2. The margin m_c of Equations 6 and 7 was set to 1 or 2 times the margin m. The dimension of the noise vector was set to 30, and the dimension of the common space was set to 1024 or 2048. |
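
The hyperparameters quoted in the Experiment Setup row can be collected into a short configuration sketch. The snippet below is a minimal, hypothetical rendering assuming PyTorch (the paper does not name its framework); the network architectures, visual feature dimension, and semantic dimension are illustrative placeholders, while the optimizer choices, learning rates, batch size, margins, noise dimension, and common-space dimension follow the values reported above.

```python
# Minimal sketch (not the authors' code) of the reported experiment setup,
# assuming PyTorch. Architectures and dimensions marked "assumed" are
# placeholders; only the optimizer/hyperparameter values come from the paper.
import torch
import torch.nn as nn

NOISE_DIM = 30       # noise vector dimension (from the paper)
COMMON_DIM = 1024    # common-space dimension: 1024 or 2048 in the paper
SEMANTIC_DIM = 312   # assumed, e.g. CUB attribute dimension; dataset-dependent
VISUAL_DIM = 4096    # assumed CNN feature dimension (e.g. a VGG-19 fc layer)
BATCH_SIZE = 64      # batch size (from the paper)

# Placeholder conditional generator/discriminator for the CGAN and a mapping
# network into the common space; the paper chooses activations from
# sigmoid, tanh, and leaky ReLU.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM + SEMANTIC_DIM, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, VISUAL_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(VISUAL_DIM + SEMANTIC_DIM, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1), nn.Sigmoid(),
)
mapping = nn.Sequential(
    nn.Linear(VISUAL_DIM, COMMON_DIM), nn.Tanh(),
)

# RMSprop for the CGAN and Adam for the mapping, as stated in the paper.
opt_g = torch.optim.RMSprop(generator.parameters(), lr=1e-4)
opt_d = torch.optim.RMSprop(discriminator.parameters(), lr=1e-4)
opt_map = torch.optim.Adam(mapping.parameters(), lr=1e-4)  # or 5e-5

# Margins from the paper: m and m_g in {0.1, 0.2}; m_c is 1x or 2x m.
m, m_g = 0.1, 0.1
m_c = 2 * m

# Example: synthesize a batch of fake visual features from noise + semantics.
z = torch.randn(BATCH_SIZE, NOISE_DIM)
s = torch.randn(BATCH_SIZE, SEMANTIC_DIM)  # stand-in for class attributes
fake_features = generator(torch.cat([z, s], dim=1))
```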