Robust Optimization over Multiple Domains
Authors: Qi Qian, Shenghuo Zhu, Jiasheng Tang, Rong Jin, Baigui Sun, Hao Li (pp. 4739–4746)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The empirical study on real-world fine-grained visual categorization and digits recognition tasks verifies the effectiveness and efficiency of the proposed framework. We conduct the experiments on training deep neural networks over multiple domains. |
| Researcher Affiliation | Industry | Qi Qian, Shenghuo Zhu, Jiasheng Tang, Rong Jin, Baigui Sun, Hao Li Alibaba Group, Bellevue, WA, 98004, USA {qi.qian, shenghuo.zhu, jiasheng.tjs, jinrong.jr, baigui.sbg, lihao.lh}@alibaba-inc.com |
| Pseudocode | Yes | Alg. 1 summarizes the main steps of the approach. Algorithm 2 Stochastic Regularized Robust Optimization |
| Open Source Code | No | The paper does not provide any concrete access information such as a specific repository link, explicit code release statement, or mention of code in supplementary materials. |
| Open Datasets | Yes | Given the data sets of VGG cats&dogs (Parkhi et al. 2012) and ImageNet (Russakovsky et al. 2015), we extract the shared labels between them and then generate the subsets with desired labels from them, respectively. There are two benchmark data sets for the task: MNIST and SVHN. MNIST (LeCun et al. 1998)... SVHN (Netzer et al. 2011)... |
| Dataset Splits | Yes | MNIST (LeCun et al. 1998) is collected for recognizing handwritten digits. It contains 60,000 images for training and 10,000 images for test. SVHN (Netzer et al. 2011) is for identifying the house numbers from Google Street View images, which consists of 604,388 training images and 26,032 test images. |
| Hardware Specification | Yes | All experiments are implemented on an NVIDIA Tesla P100 GPU. |
| Software Dependencies | No | The paper mentions software components like SGD, ResNet18, and AlexNet, but does not provide specific version numbers for any libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | Deep models are trained with SGD and the size of each mini-batch is set to 200. For the methods learning with multiple domains, the number of examples from different domains are the same in a mini-batch and the size is m = 200/K. ... ResNet18 ... initialized with the parameters learned from ILSVRC2012 ... and we set the learning rate as ηw = 0.005 for fine-tuning. ... AlexNet ... and set the learning rate as ηw = 0.01. |
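The quoted setup fixes a mini-batch of 200 examples split evenly across the K domains (m = 200/K per domain). A minimal sketch of that sampling scheme, assuming a hypothetical helper `balanced_minibatch` (the name and interface are not from the paper):

```python
import random

def balanced_minibatch(domain_data, batch_size=200):
    """Draw one mini-batch with an equal number of examples per domain.

    domain_data: list of K per-domain example lists.
    Returns a list of (domain_index, example) pairs containing
    m = batch_size // K examples from each domain, matching the
    paper's setting of m = 200/K.
    """
    k = len(domain_data)
    m = batch_size // k  # per-domain count
    batch = []
    for i, examples in enumerate(domain_data):
        # Sample m examples without replacement from domain i.
        batch.extend((i, x) for x in random.sample(examples, m))
    return batch
```

This mirrors the stated constraint only; how the sampled examples feed into the robust objective of Algorithm 2 is not reconstructed here.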