Domain Generalization with Vital Phase Augmentation

Authors: Ingyun Lee, Wooju Lee, Hyun Myung

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present experimental evaluations of our proposed approach, which exhibited improved performance for both clean and corrupted data. VIPAug achieved SOTA performance on the benchmark CIFAR-10 and CIFAR-100 datasets, as well as near-SOTA performance on the ImageNet-100 and ImageNet datasets.
Researcher Affiliation | Academia | Urban Robotics Lab, School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Republic of Korea
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/excitedkid/vipaug.
Open Datasets | Yes | We experimentally evaluated the performance of VIPAug on the most widely used CIFAR-10, CIFAR-100, ImageNet-100, and ImageNet datasets. CIFAR-10 and CIFAR-100 comprise 50,000 training images and 10,000 testing images... ImageNet consists of 1.2 million images and 1,000 classes. ImageNet-100 consists of 100 randomly selected classes of ImageNet.
Dataset Splits | No | Detailed training setup can be seen in the supplementary material.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers.
Experiment Setup | Yes | We trained all methods for 250 epochs. Detailed training setup can be seen in the supplementary material. We used the 2 × 2 × 1 argmax filter, and set σ_vital = 0.001 and σ_nonvital = 0.014 on CIFAR-10 and σ_vital = 0.005 and σ_nonvital = 0.012 on CIFAR-100.
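The CIFAR datasets cited in the Open Datasets row are publicly available through torchvision. The snippet below is a minimal loading sketch only; the root path, download flag, and bare ToTensor transform are assumptions, not the paper's training pipeline (ImageNet and ImageNet-100 require a separate manual download and are omitted).

```python
# Minimal sketch: loading the CIFAR datasets referenced in the Open Datasets row.
# The "./data" root and the plain ToTensor transform are assumptions for illustration.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

cifar10_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar10_test = datasets.CIFAR10(root="./data", train=False, download=True, transform=to_tensor)
cifar100_train = datasets.CIFAR100(root="./data", train=True, download=True, transform=to_tensor)
cifar100_test = datasets.CIFAR100(root="./data", train=False, download=True, transform=to_tensor)

# 50,000 training and 10,000 test images each, as quoted from the paper.
print(len(cifar10_train), len(cifar10_test), len(cifar100_train), len(cifar100_test))
```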
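The Experiment Setup row quotes σ_vital = 0.001 and σ_nonvital = 0.014 for CIFAR-10. The authors' actual implementation is at the GitHub link above; the sketch below only illustrates, under loose assumptions, the general idea of perturbing amplitude-selected "vital" Fourier phases with the smaller sigma while the remaining phases receive the larger one. The top-quantile amplitude selection used here is a hypothetical stand-in for the paper's 2 × 2 × 1 argmax filter, and the additive Gaussian phase noise is likewise an assumption about how the sigmas are applied.

```python
import numpy as np

def vipaug_sketch(img, sigma_vital=0.001, sigma_nonvital=0.014, vital_frac=0.25):
    """Hedged sketch of phase-noise augmentation in the spirit of VIPAug.

    img: float array of shape (H, W, C) with values in [0, 1].
    Phases whose Fourier amplitudes are largest (top `vital_frac` per channel,
    a simplification of the paper's argmax filter) are treated as vital and
    perturbed with sigma_vital; all other phases receive sigma_nonvital.
    """
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        spec = np.fft.fft2(img[..., c])
        amp, phase = np.abs(spec), np.angle(spec)

        # Mark the highest-amplitude frequencies as vital (assumption: a simple
        # top-quantile proxy for the paper's 2 x 2 x 1 argmax filter).
        thresh = np.quantile(amp, 1.0 - vital_frac)
        vital = amp >= thresh

        # Smaller Gaussian phase noise for vital phases, larger for the rest.
        noise = np.where(vital,
                         np.random.normal(0.0, sigma_vital, amp.shape),
                         np.random.normal(0.0, sigma_nonvital, amp.shape))
        perturbed = amp * np.exp(1j * (phase + noise))
        out[..., c] = np.real(np.fft.ifft2(perturbed))
    return np.clip(out, 0.0, 1.0)

# Example usage with a random stand-in image (32x32x3, CIFAR-sized).
example = np.random.rand(32, 32, 3)
augmented = vipaug_sketch(example)
```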