Learning Image-Specific Attributes by Hyperbolic Neighborhood Graph Propagation

Authors: Xiaofeng Xu, Ivor W. Tsang, Xiaofeng Cao, Ruiheng Zhang, Chuancai Liu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on five benchmark datasets demonstrate the significant superiority of the learned image-specific attributes over the original class-specific attributes in the zero-shot object classification task.
Researcher Affiliation Academia (1) School of Computer Science and Engineering, Nanjing University of Science and Technology; (2) Faculty of Engineering and Information Technology, University of Technology Sydney; (3) School of Mechatronical Engineering, Beijing Institute of Technology; (4) Collaborative Innovation Center of IoT Technology and Intelligent Systems, Minjiang University
Pseudocode Yes Algorithm 1: Image-Specific Attribute Learning Model
Input: Image features; class-specific attributes A = {a_i}_{i=1}^{L}, one per label.
Output: Image-specific attributes Ã = {ã_i}_{i=1}^{N}, one per sample.
// Construct Hyperbolic Neighborhood Graph
1: Calculate the hyperbolic distance between each sample pair by Eq.(2);
2: Construct the hyperbolic neighborhood graph (HNG) by Eq.(3);
// Identify Inconsistent Samples
3: Calculate the weights of the edges in the HNG by Eq.(4);
4: Calculate the neighborhood consistency of each vertex in the HNG by Eq.(5);
5: Identify inconsistent samples by Eq.(7);
// Refine Attributes for Inconsistent Samples
6: Refine the attributes of the inconsistent samples.
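The six steps above can be sketched in NumPy. This is a hedged illustration, not the paper's implementation: the distance function uses the standard Poincaré-ball formula (the paper's Eq.(2) is not reproduced in this review), and the k-nearest-neighbor graph, inverse-distance edge weights, weighted label-agreement consistency, and neighbor-averaging refinement are plausible stand-ins for Eqs.(3)-(7). The names `k`, `p`, and `theta` follow the paper's hyperparameters p and θ; everything else is assumed.

```python
import numpy as np

def poincare_distance(u, v):
    # Standard Poincaré-ball distance; a stand-in for the paper's Eq.(2).
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

def learn_image_specific_attributes(X, labels, class_attrs, k=5, p=2, theta=0.7):
    """Sketch of Algorithm 1: build the HNG, flag inconsistent samples,
    and refine their attributes. The weight, consistency, and refinement
    rules are assumptions, not the paper's exact equations."""
    n = len(X)
    # Pairwise hyperbolic distances (points must lie inside the unit ball).
    D = np.array([[poincare_distance(X[i], X[j]) for j in range(n)]
                  for i in range(n)])
    # k-nearest-neighbor graph in hyperbolic space (stand-in for Eq.(3));
    # index 0 of argsort is the sample itself (distance 0), so skip it.
    neighbors = [np.argsort(D[i])[1:k + 1] for i in range(n)]
    # Initialize each sample with its class-specific attribute vector.
    A = np.array([class_attrs[labels[i]] for i in range(n)], dtype=float)
    refined = A.copy()
    for i in range(n):
        nb = neighbors[i]
        w = 1.0 / (D[i, nb] ** p + 1e-12)   # inverse-distance weights (stand-in for Eq.(4))
        w /= w.sum()
        same = np.array([labels[j] == labels[i] for j in nb], dtype=float)
        consistency = float(w @ same)        # weighted label agreement (stand-in for Eq.(5))
        if consistency < theta:              # threshold test in the spirit of Eq.(7)
            refined[i] = w @ A[nb]           # refine toward the neighbors' attributes
    return refined
```

A sample whose hyperbolic neighborhood is dominated by other classes falls below θ and has its attribute vector replaced by a distance-weighted average of its neighbors' attributes; consistent samples keep their original class-specific attributes.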
Open Source Code No The paper does not provide any explicit statement or link indicating the release of open-source code for the described methodology.
Open Datasets Yes Experiments are conducted on five zero-shot classification benchmark datasets: (1) Animals with Attributes (AwA) [Lampert et al., 2014], (2) Animals with Attributes 2 (AwA2) [Xian et al., 2018a], (3) attribute-Pascal-Yahoo (aPY) [Farhadi et al., 2009], (4) Caltech-UCSD Birds-200-2011 (CUB) [Welinder et al., 2011], and (5) SUN Attribute Database (SUN) [Patterson and Hays, 2012].
Dataset Splits No Table 1 reports the number of classes and the number of images in the training and test splits, but does not describe a separate validation split.
Hardware Specification No The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies No The paper mentions 'ResNet101 [He et al., 2016] is used to extract deep features for experiments' but does not provide specific version numbers for any software dependencies.
Experiment Setup Yes In Eq.(4), 'p is an arbitrary positive real number (usually sets to 1 or 2)'. In Eq.(7), 'θ is a hyperparameter that constraints the consistency degree when identifying inconsistent samples'. Section 4.4 'Parameter Sensitivity Analysis' discusses the influence of the hyperparameter θ and states 'The accuracies reach the peak when θ is close to 0.7'.