Self-Joint Supervised Learning
Authors: Navid Kardan, Mubarak Shah, Mitch Hill
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark image datasets show our method offers significant improvement over standard supervised learning in terms of accuracy, robustness against adversarial attacks, out-of-distribution detection, and overconfidence mitigation. |
| Researcher Affiliation | Academia | Center for Research in Computer Vision, Department of Computer Science, and Department of Statistics and Data Science, University of Central Florida |
| Pseudocode | Yes | Algorithm 1 Training procedure for self-joint learning framework |
| Open Source Code | Yes | Code: github.com/ndkn/Self-joint-Learning |
| Open Datasets | Yes | In this section, we evaluate the proposed framework on four visual classification tasks, i.e., CIFAR-10, CIFAR-100, SVHN, and STL-10. ... Table 4 shows the characteristics of these datasets, along with their corresponding auxiliary dataset in our experiments. |
| Dataset Splits | Yes | We always keep 10% of test data for validation set and report the final test accuracy on the rest. (A minimal split sketch follows this table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, or cloud compute instances) used for running the experiments. |
| Software Dependencies | Yes | We implemented our code in PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | To make general conclusions, we refrain from hyperparameter optimization for any specific dataset and apply almost the same configuration (data augmentation, training recipe, and model architecture) for all datasets (details in Appendix A.5). ... The learning rate always starts with 0.4 and it decays after every 10 epochs with a constant scale of 0.81. ... The batch size starts at 128 (256 for STL-10) and is doubled every 20 epochs. ... The optimizer is always SGD with Nesterov (momentum=0.9, weight decay=1e-4). (A training-recipe sketch follows this table.) |
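
The dataset-split protocol quoted above (10% of the official test set held out for validation, final accuracy reported on the rest) can be illustrated with a short PyTorch snippet. This is a minimal sketch under assumed details, not the authors' released code: the dataset choice, fixed seed, and use of `random_split` are illustrative assumptions.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical illustration: hold out 10% of the official test set as a
# validation split and keep the remaining 90% for the final test accuracy.
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())

n_val = len(test_set) // 10                    # 10% for validation
n_test = len(test_set) - n_val                 # remaining 90% for reported accuracy
generator = torch.Generator().manual_seed(0)   # fixed seed for a reproducible split (assumption)
val_set, eval_set = random_split(test_set, [n_val, n_test], generator=generator)
```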
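Likewise, the quoted training recipe (SGD with Nesterov momentum 0.9, weight decay 1e-4, learning rate 0.4 decayed by a factor of 0.81 every 10 epochs, batch size 128 doubled every 20 epochs) maps onto standard PyTorch components. The sketch below uses a plain cross-entropy loss, a placeholder ResNet-18 backbone, and an illustrative epoch count as assumptions; it does not implement the self-joint pairing objective of Algorithm 1.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Minimal sketch of the reported recipe (not the authors' code):
# SGD with Nesterov momentum, lr 0.4 decayed by 0.81 every 10 epochs,
# batch size starting at 128 and doubled every 20 epochs.
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
model = models.resnet18(num_classes=10)        # placeholder backbone (assumption)
criterion = nn.CrossEntropyLoss()              # standard supervised loss, not the self-joint objective

optimizer = torch.optim.SGD(model.parameters(), lr=0.4, momentum=0.9,
                            weight_decay=1e-4, nesterov=True)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.81)

batch_size, num_epochs = 128, 100              # epoch count is illustrative
for epoch in range(num_epochs):
    if epoch > 0 and epoch % 20 == 0:
        batch_size *= 2                        # "doubled every 20 epochs"
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```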