Asking Friendly Strangers: Non-Semantic Attribute Transfer

Authors: Nils Murrugarra-Llerena, Adriana Kovashka

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We validate our approach on 272 attributes from five domains: animals, objects, scenes, shoes and textures." |
| Researcher Affiliation | Academia | Nils Murrugarra-Llerena, Adriana Kovashka, Department of Computer Science, University of Pittsburgh ({nineil, kovashka}@cs.pitt.edu) |
| Pseudocode | No | The paper describes the network formulation and optimization with equations and text, but includes no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides no explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We use five datasets: Animals with Attributes (Lampert, Nickisch, and Harmeling 2009), aPascal/aYahoo Objects (Farhadi et al. 2009), SUN Scenes (Patterson et al. 2014), Shoes (Kovashka, Parikh, and Grauman 2015), and Textures (Caputo, Hayman, and Mallikarjuna 2005)." |
| Dataset Splits | Yes | "For each dataset, we split the data in 40% for training the source models, 10% for training the target models, 10% for selection of the optimal network parameters, and 40% to test the final trained network on the target data." (A minimal sketch of such a split appears below.) |
| Hardware Specification | No | The paper acknowledges computing resources from the Extreme Science and Engineering Discovery Environment (XSEDE) and the Data Exacell at the Pittsburgh Supercomputing Center (PSC), but gives no specific hardware details such as GPU models, CPU models, or memory specifications. |
| Software Dependencies | No | The paper states: "We implemented the described network using the Theano (Theano Development Team 2016) and Keras (Chollet 2015) frameworks", but provides no version numbers for these dependencies. |
| Experiment Setup | Yes | "First, we did parameter exploration using 70 random configurations of learning rate and L2 regularizer weight. Each configuration ran for five epochs with the ADAM optimizer. Then the configuration with the highest accuracy on a validation set was selected, and a network with this configuration was trained for 150 epochs. At the end of each epoch, the network was evaluated on a validation set, and training was stopped when the validation accuracy began to decrease. The loss weights were selected similarly to other transfer-learning work (Tzeng et al. 2015), where the main task has a weight of 1 and side tasks have a weight of 0.1." (A sketch of this two-stage procedure appears below.) |
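
To make the 40/10/10/40 protocol in the Dataset Splits row concrete, here is a minimal split sketch. The function name, the use of NumPy, and the fixed seed are illustrative assumptions; the authors do not publish their splitting code.

```python
# A minimal sketch of the 40/10/10/40 split described in the paper.
# The names and the fixed seed are illustrative assumptions.
import numpy as np

def split_indices(n_samples, seed=0):
    """Shuffle sample indices and partition them 40/10/10/40 into
    source-train / target-train / validation / test sets."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n_samples)
    a = int(0.4 * n_samples)   # first 40%: training the source models
    b = int(0.5 * n_samples)   # next 10%: training the target models
    c = int(0.6 * n_samples)   # next 10%: selecting network parameters
    return idx[:a], idx[a:b], idx[b:c], idx[c:]  # final 40%: test

src_train, tgt_train, val, test = split_indices(10000)
```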
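The Experiment Setup row describes a two-stage procedure: a 70-configuration random search over learning rate and L2 weight (five epochs each), then a longer run of the winning configuration with accuracy-based early stopping. Below is a self-contained Keras sketch of that schedule. The toy two-head model, synthetic data, and log-uniform sampling ranges are assumptions; only the 70 configurations, the 5-epoch trials, the ADAM optimizer, the 150-epoch cap, the stop-when-accuracy-decreases rule, and the 1.0/0.1 loss weights come from the paper. The authors used Theano-backend Keras; this sketch uses the current tensorflow.keras API instead.

```python
# A minimal sketch of the quoted search/training schedule, assuming a
# toy two-head network and synthetic data; the real multi-task
# architecture and hyper-parameter ranges are not published.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

rng = np.random.RandomState(0)
x = rng.rand(200, 32).astype("float32")
y_main = (rng.rand(200) > 0.5).astype("float32")   # main-task label
y_side = (rng.rand(200) > 0.5).astype("float32")   # side-task label
train, val = slice(0, 160), slice(160, 200)

def build_model(lr, l2):
    """Toy stand-in for the multi-task network: one shared layer,
    a main attribute head, and a side-task head."""
    inp = keras.Input(shape=(32,))
    h = layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(l2))(inp)
    main = layers.Dense(1, activation="sigmoid", name="main")(h)
    side = layers.Dense(1, activation="sigmoid", name="side")(h)
    model = keras.Model(inp, [main, side])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy",
                  loss_weights={"main": 1.0, "side": 0.1},  # as in Tzeng et al. 2015
                  metrics={"main": "accuracy"})
    return model

def trial(lr, l2, epochs):
    """Train one configuration and return its best validation accuracy."""
    m = build_model(lr, l2)
    hist = m.fit(x[train], {"main": y_main[train], "side": y_side[train]},
                 epochs=epochs, verbose=0,
                 validation_data=(x[val], {"main": y_main[val],
                                           "side": y_side[val]}))
    return max(hist.history["val_main_accuracy"])

# Stage 1: 70 random (learning rate, L2 weight) configurations,
# five epochs each; the sampling ranges below are assumptions.
configs = [(10 ** rng.uniform(-5, -2), 10 ** rng.uniform(-6, -2))
           for _ in range(70)]
best_lr, best_l2 = max(configs, key=lambda c: trial(*c, epochs=5))

# Stage 2: up to 150 epochs; stop as soon as validation accuracy drops.
model = build_model(best_lr, best_l2)
model.fit(x[train], {"main": y_main[train], "side": y_side[train]},
          epochs=150,
          validation_data=(x[val], {"main": y_main[val], "side": y_side[val]}),
          callbacks=[keras.callbacks.EarlyStopping(
              monitor="val_main_accuracy", mode="max", patience=0)])
```

EarlyStopping with patience=0 halts on the first epoch whose validation accuracy fails to improve, which matches the paper's "stopped when the validation accuracy began to decrease" criterion as closely as the standard callback allows.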