Domain-Invariant Projection Learning for Zero-Shot Recognition

Authors: An Zhao, Mingyu Ding, Jiechao Guan, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiments show that our model outperforms the state-of-the-art alternatives by significant margins." The paper also contains Section 4 "Experiments", with subsections 4.1 Datasets and Settings, 4.2 Comparative Results, and 4.3 Further Evaluations. |
| Researcher Affiliation | Collaboration | (1) Beijing Key Laboratory of Big Data Management and Analysis Methods, School of Information, Renmin University of China, Beijing 100872, China; (2) School of EECS, Queen Mary University of London, London E1 4NS, U.K.; (3) Samsung AI Centre, Cambridge, U.K. |
| Pseudocode | Yes | Algorithm 1: Domain-Invariant Projection Learning |
| Open Source Code | No | The paper does not include an explicit statement about releasing its own source code, nor does it provide a link to a code repository for its methodology. |
| Open Datasets | Yes | "Five widely-used benchmark datasets are selected in this paper: Animals with Attributes (AwA) [25], CUB-200-2011 Birds (CUB) [56], aPascal&Yahoo (aPY) [13], and SUN Attribute (SUN) [41]. One large-scale dataset is ILSVRC2012/2010 [47] (ImNet)..." |
| Dataset Splits | Yes | "ZSL Settings. (1) Standard ZSL: This setting is widely used in previous works [2, 44]. The seen/unseen class splits of the five datasets are presented in Table 1. ... (3) Generalized ZSL: The third ZSL setting, which has emerged recently [43, 7], is the generalized setting, under which the test set contains data samples from both seen and unseen classes. This setting is clearly more reflective of real-world application scenarios. ... Parameter Settings. ... the parameters are selected by class-wise cross-validation on the training set." |
| Hardware Specification | No | The paper refers to visual features extracted using models such as GoogLeNet, VGG19, and ResNet101, but it does not specify the hardware (e.g., GPU models, CPU types) used for running its own experiments. |
| Software Dependencies | No | The paper mentions general software components such as the skip-gram text model and word2vec, but it does not provide specific version numbers for any programming languages, libraries, or frameworks used in its implementation (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | "Parameter Settings. Our full DIPL model (including superclasses) has only two free parameters to tune: α ∈ (0, 1) (see Step 2 in Algorithm 1) and r (the number of superclasses used in Sec. 3.4). As in [50, 24], the parameters are selected by class-wise cross-validation on the training set. In this paper, we empirically set α_t = 0.99^t · α (with α_0 = α ∈ (0, 1)) and β = 0.01 in all experiments. Also, for superclasses: ... with r = 500 clusters ..." |
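The annealing schedule quoted above (α_t = 0.99^t · α, with α ∈ (0, 1) selected by cross-validation) can be sketched in a few lines. This is a minimal illustration, not the paper's released code; the starting value `alpha0 = 0.5` below is purely a placeholder, since the paper selects it by class-wise cross-validation on the training set.

```python
def alpha_schedule(alpha0: float, num_steps: int) -> list:
    """Return alpha_t = 0.99**t * alpha0 for t = 0, ..., num_steps - 1.

    alpha0 is assumed to lie in (0, 1), as in the paper's setting.
    """
    if not 0.0 < alpha0 < 1.0:
        raise ValueError("alpha0 must lie in the open interval (0, 1)")
    return [0.99 ** t * alpha0 for t in range(num_steps)]


# Illustrative usage with a placeholder alpha0:
schedule = alpha_schedule(0.5, 100)
print(schedule[0])   # equals alpha0 at t = 0
print(schedule[-1])  # strictly smaller: the weight decays geometrically
```

The geometric decay means the parameter shrinks by 1% per iteration, so early iterations dominate while the contribution fades smoothly over training.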