Weakly Supervised Collective Feature Learning From Curated Media

Authors: Yusuke Mukuta, Akisato Kimura, David B. Adrian, Zoubin Ghahramani

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate the effectiveness of the proposed method through several experiments."
Researcher Affiliation | Collaboration | Yusuke Mukuta (1,3), Akisato Kimura (1,2), David B. Adrian (1,4), Zoubin Ghahramani (2,5). 1. NTT Communication Science Laboratories, Japan. 2. Department of Engineering, University of Cambridge, United Kingdom. 3. The University of Tokyo, Japan. 4. Technical University of Munich, Germany. 5. Uber AI Labs, USA.
Pseudocode | No | The paper describes its proposed models and methods using mathematical equations and textual explanations, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states, "We will publish the dataset at http://www.kecl.ntt.co.jp/people/kimura.akisato/socialweb4.html.", but does not provide access to the source code for the methodology described in the paper.
Open Datasets | Yes | "We used various datasets from several different domains, such as food classification (UEC-FOOD100 (Matsuda, Hoashi, and Yanai 2012), UEC-FOOD256 (Kawano and Yanai 2014)), fashion classification (Hipster Wars (Kiapour et al. 2014), Apparel (Bossard et al. 2012)) and image sentiment analysis (Instagram (Katsurai and Satoh 2016))."
Dataset Splits | Yes | "To separate the dataset into training and test data, we first selected 10% (or 1 if the node had less than 10 edges) of all the edges for each image node as test data, and used the rest for training."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using VGG16 and Word2Vec models but does not provide specific version numbers for any software components, such as libraries or programming languages.
Experiment Setup | Yes | "We exploit a hinge loss for the loss function and ℓ2-norm regularization of all the model parameters except the bias terms."
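The dataset-split protocol quoted above (hold out 10% of each image node's edges as test data, or 1 edge if the node has fewer than 10) can be sketched as follows. This is a minimal illustration, not the authors' code; the `edges_by_node` dictionary representation and the `split_edges` helper name are assumptions for the example.

```python
import random

def split_edges(edges_by_node, test_frac=0.1, seed=0):
    """Per-node edge holdout, following the split quoted above:
    for each image node, hold out 10% of its edges as test data
    (or 1 edge if the node has fewer than 10); the rest is training.

    edges_by_node: dict mapping a node id to a list of its edges
    (an illustrative assumption about the graph representation).
    """
    rng = random.Random(seed)
    train, test = [], []
    for node, edges in edges_by_node.items():
        edges = list(edges)
        rng.shuffle(edges)
        # int() floors, so nodes with < 10 edges would get 0; max(1, ...)
        # enforces the "or 1" rule from the paper.
        n_test = max(1, int(len(edges) * test_frac))
        test.extend(edges[:n_test])
        train.extend(edges[n_test:])
    return train, test
```

For a node with 20 edges this holds out 2, and for a node with 3 edges it holds out exactly 1, matching the quoted protocol.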
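The experiment-setup row quotes a hinge loss with ℓ2 regularization on all parameters except the bias terms. A generic sketch of that kind of objective for a linear classifier is shown below; this is an assumed, simplified stand-in, not the paper's exact collective feature-learning objective, and the function name and signature are hypothetical.

```python
import numpy as np

def hinge_loss_l2(W, b, X, y, lam=1e-3):
    """Hinge loss plus l2 regularization of the weights, with the bias
    excluded from the regularizer, as the quoted setup describes.

    X: (n, d) features; y: (n,) labels in {-1, +1};
    W: (d,) weights; b: scalar bias; lam: regularization strength.
    """
    margins = y * (X @ W + b)
    hinge = np.maximum(0.0, 1.0 - margins).mean()  # hinge loss term
    reg = 0.5 * lam * np.sum(W ** 2)               # bias b not regularized
    return hinge + reg
```

Excluding the bias from the ℓ2 term is the standard convention: penalizing the bias would needlessly shift the decision boundary toward the origin without improving generalization.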