Self-Paced Boost Learning for Classification
Authors: Te Pi, Xi Li, Zhongfei Zhang, Deyu Meng, Fei Wu, Jun Xiao, Yueting Zhuang
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on several real-world datasets show the superiority of SPBL in terms of both effectiveness and robustness. |
| Researcher Affiliation | Academia | Zhejiang University, Hangzhou, China; Xi'an Jiaotong University, Xi'an, China |
| Pseudocode | Yes | Algorithm 1: SPBL for Classification |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | Three real-world image datasets are used. We choose the image data for experiments because the underlying patterns of image features tend to have rich nonlinear correlations. The three datasets are Caltech256, Animals with Attributes (AWA), and Corel10k. All of them are publicly available and fully labeled with each sample belonging to only one class. |
| Dataset Splits | Yes | Caltech256 (SP features, 256 classes, 29,780 samples): 50%/20%/30%; AWA (Decaf features, 50 classes, 30,475 samples): 50%/20%/30%; Corel10k (SP features, 100 classes, 10,000 samples): 50%/20%/30% |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions features (spatial pyramid features, Decaf feature) and general techniques (logistic linear form for h(x)), but does not provide specific software names with version numbers. |
| Experiment Setup | Yes | We implement a grid search for the tuning of the hyperparameters. Further, in order to test the robustness of our model, we manually add label noise into the training set by randomly selecting and relabeling s% of the training samples with labels different from the true ones. We conduct experiments with s ∈ {0, 5, 10, 15} for the three datasets. We adopt the strategy in [Jiang et al., 2014b] for the annealing of the SPL parameters (λ, γ) (Lines 10 to 12 in Algorithm 1). Specifically, at each iteration, we sort the samples in the ascending order of their losses, and set (λ, γ) based on the number of samples to be selected by now. |
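The label-noise robustness test quoted above (relabel s% of the training samples with a wrong label) can be sketched as follows. This is a hypothetical helper written for illustration; the function name, signature, and use of NumPy are our own assumptions, not the paper's code.

```python
import numpy as np

def add_label_noise(y, s, num_classes, rng=None):
    """Relabel s% of the training samples with a label different from
    the true one, as in the paper's robustness experiments.

    Hypothetical implementation; names and signature are assumptions.
    """
    rng = np.random.default_rng(rng)
    y = y.copy()
    n_noisy = int(len(y) * s / 100)                       # number of samples to corrupt
    idx = rng.choice(len(y), size=n_noisy, replace=False)  # random subset, no repeats
    for i in idx:
        # draw a wrong label uniformly from the other classes
        y[i] = rng.choice([c for c in range(num_classes) if c != y[i]])
    return y
```

For s = 0 the labels are returned unchanged, which matches the noise-free baseline in the paper's setting s ∈ {0, 5, 10, 15}.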
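The annealing step described in the last cell (sort samples by ascending loss, then set the pace parameters from the number of samples to be selected so far) can be illustrated with a minimal hard-weighting sketch. This is our own simplification for a single SPL threshold λ; the paper's scheme in [Jiang et al., 2014b] involves a second parameter and a per-iteration schedule for k that are not reproduced here.

```python
import numpy as np

def spl_select(losses, k):
    """Hard self-paced selection: pick the k easiest (smallest-loss)
    samples and set the pace threshold lambda from the k-th loss.

    Sketch under stated assumptions, not the authors' implementation.
    """
    order = np.argsort(losses)     # samples in ascending order of loss
    selected = order[:k]           # the k samples selected "by now"
    lam = losses[order[k - 1]]     # threshold induced by the k-th smallest loss
    # binary SPL weights: 1 for samples whose loss is within the threshold
    weights = (losses <= lam).astype(float)
    return selected, lam, weights
```

Increasing k across iterations admits progressively harder samples, which is the "annealing" of the pace parameters referred to in the setup.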