Exploring Commonality and Individuality for Multi-Modal Curriculum Learning
Authors: Chen Gong
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide the empirical evaluations of our SMMCL by comparing it with five state-of-the-art methods on four typical image datasets. |
| Researcher Affiliation | Academia | Pattern Computing and Applications (PCA) Lab, Nanjing University of Science and Technology; Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University. E-mail: chen.gong@njust.edu.cn |
| Pseudocode | Yes | "Algorithm 1: The ADMM process for solving Eq. (5)" and "Algorithm 2: SMMCL for graph-based label propagation" |
| Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is available, nor does it provide a link to a repository. |
| Open Datasets | Yes | The four image classification datasets include Architecture (Xu et al. 2016) for architecture style recognition, UIUC (Li and Li 2007) for sports event classification, MSRC (Criminisi 2004) for natural image classification, and Scene15 (Lazebnik, Schmid, and Ponce 2006) for scene categorization. |
| Dataset Splits | No | The paper does not specify explicit train/validation/test splits; it only states: "For all the datasets, we evaluate the classification accuracies of all compared methods under different sizes of labeled set, and the experiment under each size is implemented five times with different initially labeled examples." |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for its dependencies. |
| Experiment Setup | Yes | The trade-off parameters of our SMMCL are set to α = 1 and β = 0.5. The parameters in SMGI are optimally tuned to λ1 = 0.01 and λ2 = 0.1 via searching the grid {0.01, 0.1, 1, 10}, and γ and λ in AMMSS are set to 0.5 and 10, respectively. In DLP, we adjust α and λ to 0.05 and 0.1 accordingly as recommended by the authors. Besides, we set β = 10, γ = 3 and η = 1.1 as they lead to the optimal results as revealed by (Gong et al. 2016b). |
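The experiment-setup row above mentions that SMGI's trade-off parameters λ1 and λ2 were tuned by searching the grid {0.01, 0.1, 1, 10}. A minimal sketch of that kind of grid search is shown below; `evaluate_smgi` is a hypothetical placeholder (not from the paper) standing in for training and scoring the method at a given parameter setting.

```python
from itertools import product

# Grid reported in the paper for SMGI's lambda1 and lambda2.
GRID = [0.01, 0.1, 1, 10]

def evaluate_smgi(lam1, lam2):
    # Hypothetical placeholder objective: in practice this would be
    # the (cross-validated) classification accuracy of SMGI trained
    # with trade-off parameters (lam1, lam2). This stand-in simply
    # peaks at the paper's reported optimum (0.01, 0.1).
    return -(abs(lam1 - 0.01) + abs(lam2 - 0.1))

def tune_parameters(evaluate, grid=GRID):
    """Exhaustively search the grid and return the best (lam1, lam2)."""
    best_params, best_score = None, float("-inf")
    for lam1, lam2 in product(grid, grid):
        score = evaluate(lam1, lam2)
        if score > best_score:
            best_params, best_score = (lam1, lam2), score
    return best_params

print(tune_parameters(evaluate_smgi))  # -> (0.01, 0.1) with this placeholder
```

With a real evaluation function the same loop reproduces the tuning protocol the paper describes: one exhaustive pass over the 4 × 4 grid, keeping the highest-scoring pair.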