Learning Loss for Test-Time Augmentation
Authors: Ildoo Kim, Younghoon Kim, Sungwoong Kim
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on several image classification benchmarks show that the proposed instance-aware test-time augmentation improves the model's robustness against various corruptions. |
| Researcher Affiliation | Collaboration | Ildoo Kim, Kakao Brain, ildoo.kim@kakaobrain.com; Younghoon Kim, Sungshin Women's University, yhkim@sungshin.ac.kr; Sungwoong Kim, Kakao Brain, swkim@kakaobrain.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. The provided links are for third-party implementations used for comparison or reference. |
| Open Datasets | Yes | We choose CIFAR-100 [25] and ImageNet [6] as standard classification benchmarks and use their corrupted variants, CIFAR-100-C and ImageNet-C [19]. |
| Dataset Splits | Yes | Let's say we have a training dataset $D_{train}$, a validation dataset $D_{valid}$, and a fully-trained target network $\Theta_{target}$. The weights of the target network are trained on $D_{train}$ and its performance is evaluated on $D_{valid}$ as in an ordinary learning scheme. We freeze the target network $\Theta_{target}$ and split $D_{train}$ into two folds, $D^{loss}_{train}$ and $D^{loss}_{valid}$. The first fold is used to train the loss prediction module and the other to validate its performance. (See the sketch after this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models or processor types used for running its experiments. |
| Software Dependencies | No | The paper mentions PyTorch [37] but does not give a version number or list other software dependencies with versions needed to replicate the experiments. |
| Experiment Setup | No | The paper refers to Appendix A.3 for implementation details, but these details (such as specific hyperparameter values or training configurations) are not present in the provided main text. |
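The split-and-freeze procedure quoted under Dataset Splits, and the instance-aware selection it enables, can be summarized in a short sketch. The code below is not the authors' implementation: the 90/10 split ratio, the stand-in networks, the three candidate augmentations, and names such as `loss_predictor` and `predict_with_tta` are illustrative assumptions, since the paper defers its actual hyperparameters and training configuration to Appendix A.3.

```python
"""Minimal sketch of the learned-loss TTA setup, under assumed configurations."""
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader, random_split

# Stand-in "fully-trained" target network Theta_target; frozen below.
target_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 100))
target_net.eval()
for p in target_net.parameters():
    p.requires_grad_(False)

# Loss prediction module: maps an augmented image to a scalar predicted loss.
loss_predictor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                               nn.ReLU(), nn.Linear(64, 1))

# Hypothetical candidate test-time augmentations (placeholders).
augmentations = [
    lambda x: x,                           # identity
    lambda x: torch.flip(x, dims=[-1]),    # horizontal flip
    lambda x: torch.clamp(x * 1.2, 0, 1),  # brightness-style perturbation
]

# Synthetic stand-in for D_train; split into D^loss_train / D^loss_valid.
train_set = TensorDataset(torch.rand(512, 3, 32, 32),
                          torch.randint(0, 100, (512,)))
n_valid = len(train_set) // 10             # assumed 90/10 split ratio
loss_train, loss_valid = random_split(
    train_set, [len(train_set) - n_valid, n_valid])

# Train the loss predictor on D^loss_train to regress the target network's
# true per-sample loss on each augmented input; D^loss_valid would be used
# to validate it.
opt = torch.optim.Adam(loss_predictor.parameters(), lr=1e-3)
for x, y in DataLoader(loss_train, batch_size=64):
    for aug in augmentations:
        xa = aug(x)
        with torch.no_grad():              # target network stays frozen
            true_loss = F.cross_entropy(target_net(xa), y, reduction="none")
        pred_loss = loss_predictor(xa).squeeze(1)
        opt.zero_grad()
        F.mse_loss(pred_loss, true_loss).backward()
        opt.step()

def predict_with_tta(x):
    """Per instance, pick the augmentation with the lowest predicted loss."""
    with torch.no_grad():
        scores = torch.stack([loss_predictor(aug(x)).squeeze(1)
                              for aug in augmentations])      # (A, B)
        best = scores.argmin(dim=0)                           # (B,)
        outs = torch.stack([target_net(aug(x)) for aug in augmentations])
        return outs[best, torch.arange(x.size(0))]            # (B, classes)
```

The key design point this sketch reflects is that the augmentation choice is made per test instance from predicted losses, rather than applying one fixed transform set to every input.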