Consensus Style Centralizing Auto-Encoder for Weak Style Classification
Authors: Shuhui Jiang, Ming Shao, Chengcheng Jia, Yun Fu
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the effectiveness of SCAE and CSCAE on both public and newly collected datasets when compared with the most recent state-of-the-art works. We evaluate our methods on two applications: fashion style classification and manga style classification. |
| Researcher Affiliation | Academia | Department of Electrical & Computer Engineering, Northeastern University, Boston, USA; College of Computer & Information Science, Northeastern University, Boston, USA. {shjiang, mingshao, cjia, yunfu}@ece.neu.edu |
| Pseudocode | Yes | Algorithm 1: Style Centralizing Auto-Encoder. INPUT: Style feature X, including weak style features. OUTPUT: Style centralizing feature Z_k and model parameters W^(c)_{1,k}, W^(c)_{2,k}, b^(c)_{1,k}, b^(c)_{2,k}, for k ∈ [1, L−1] and c ∈ {1, ..., N_c}. |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. |
| Open Datasets | Yes | Hipster Wars (Kiapour et al. 2014): contains 5 categories and 1,893 fashion-style images. Online Shopping: collected by the authors from online shopping websites (e.g., Nordstrom.com, barneys.com), containing more than 30,000 images. The paper states: "Due to the space limitation, more descriptions of Online Shopping dataset will be provided in the future release." Manga (Chu and Chao 2014): Chu et al. collected a shonen and shojo manga database including 240 panels. |
| Dataset Splits | No | The paper states 'a 9:1 training to test ratio is used for training-test process', but does not explicitly mention a validation set or its split. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of an L-BFGS optimizer and SVM classifier, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | The default settings of SCAE, MC-SCAE and CSCAE are L = 4, and the layer size is 400. In addition, ρ = 0.05, λ = 10^−5 and β = 10^−2. Nearest neighbor is used as the classifier, with the number of neighbors empirically set to 5. |
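The reported setup (layer size 400, sparsity target ρ = 0.05, λ = 10^−5, β = 10^−2) matches the hyperparameters of a standard sparse auto-encoder objective: reconstruction error plus an L2 weight-decay term weighted by λ and a KL-divergence sparsity penalty weighted by β. The sketch below illustrates that generic objective with the paper's values; it is an assumption-laden illustration, not the authors' SCAE, whose style-centralizing reconstruction target is omitted here.

```python
import numpy as np

# Hyperparameters as reported in the experiment setup.
RHO = 0.05      # target mean activation of hidden units (rho)
LAM = 1e-5      # weight-decay coefficient (lambda)
BETA = 1e-2     # sparsity-penalty coefficient (beta)
HIDDEN = 400    # hidden-layer size


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def sparse_ae_loss(X, W1, b1, W2, b2):
    """Standard sparse auto-encoder loss (sketch only):
    reconstruction + lambda * weight decay + beta * KL sparsity.
    The SCAE's style-centralizing target is NOT modeled here."""
    H = sigmoid(X @ W1 + b1)           # hidden activations
    X_hat = sigmoid(H @ W2 + b2)       # reconstruction of the input
    recon = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))
    decay = 0.5 * LAM * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    # Mean activation per hidden unit, clipped for numerical safety.
    rho_hat = np.clip(H.mean(axis=0), 1e-8, 1 - 1e-8)
    kl = np.sum(RHO * np.log(RHO / rho_hat)
                + (1 - RHO) * np.log((1 - RHO) / (1 - rho_hat)))
    return recon + decay + BETA * kl


# Toy check on random 64-dimensional "style features" (illustrative data).
rng = np.random.default_rng(0)
X = rng.random((16, 64))
W1 = 0.01 * rng.standard_normal((64, HIDDEN))
W2 = 0.01 * rng.standard_normal((HIDDEN, 64))
loss = sparse_ae_loss(X, W1, np.zeros(HIDDEN), W2, np.zeros(64))
print(loss)
```

In practice an optimizer such as L-BFGS (mentioned in the paper) would minimize this loss with respect to W1, b1, W2, b2 at each of the L − 1 layers.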