Background-Mixed Augmentation for Weakly Supervised Change Detection
Authors: Rui Huang, Ruofei Wang, Qing Guo, Jieda Wei, Yuxiang Zhang, Wei Fan, Yang Liu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments in two public datasets and enhance four state-of-the-art methods, demonstrating the advantages of our method. |
| Researcher Affiliation | Academia | 1 School of Computer Science and Technology, Civil Aviation University of China, China 2 Center for Frontier AI Research (CFAR), A*STAR, Singapore 3 Zhejiang Sci-Tech University, China 4 Nanyang Technological University, Singapore |
| Pseudocode | Yes | Algorithm 1: Learning CD models via BGMix |
| Open Source Code | Yes | We release the code at https://github.com/tsingqguo/bgmix. |
| Open Datasets | Yes | We conduct the experiments on two widely used remote sensing CD datasets, i.e. AICD (Bourdis, Marraud, and Sahbi 2011) and BCD (Ji, Wei, and Lu 2018). |
| Dataset Splits | No | AICD has 1000 aerial image pairs with the resolution of 800 × 600. We randomly select 900 image pairs as the training dataset and the rest for testing. ... We randomly select 90% from the cropped image pairs as the training dataset and the left is used as the testing set. The paper specifies train and test splits for both datasets but does not mention a separate validation set or describe how model selection and hyperparameter tuning were performed. |
| Hardware Specification | Yes | All experiments in the subsequent section are conducted on a 24GB RTX3090 GPU. |
| Software Dependencies | No | The proposed method is implemented with Pytorch (Paszke et al. 2019). While PyTorch is named, no version number is given, and no other software dependencies or their versions are listed. |
| Experiment Setup | Yes | We set the number of augmentation paths K to 4 and the augmentation operation set O with eight kinds of operations, i.e., O={background-aware operation, auto contrast, equalize, posterize, rotate, solarize, shear, translate}. The hyper-parameters λ1, λ2, λ3, λ4 and λ5 are set to 1, 1, 5, 1 and 0.01 for AICD, and 1, 1, 3, 1 and 0.01 for BCD, respectively. ... We use SGD to optimize the network parameters. The batch size, learning rate, and momentum are set to 4, 1e-4, and 0.5, respectively. The largest iteration number is 100k. |
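The reported experiment setup can be collected into a single configuration for reference. This is a minimal sketch, not code from the released repository; the dictionary keys (`num_aug_paths`, `loss_weights`, etc.) are illustrative names, while the values are those stated in the paper.

```python
# Hyperparameters as reported in the paper (AICD settings).
# Key names are illustrative; values come from the quoted setup.
AICD_CONFIG = {
    "num_aug_paths": 4,  # K, the number of augmentation paths
    "aug_ops": [         # the augmentation operation set O (8 operations)
        "background-aware operation", "auto contrast", "equalize",
        "posterize", "rotate", "solarize", "shear", "translate",
    ],
    "loss_weights": {"lambda1": 1, "lambda2": 1, "lambda3": 5,
                     "lambda4": 1, "lambda5": 0.01},
    "optimizer": "SGD",
    "batch_size": 4,
    "learning_rate": 1e-4,
    "momentum": 0.5,
    "max_iterations": 100_000,
}

# BCD uses the same settings except lambda3 = 3.
BCD_CONFIG = {
    **AICD_CONFIG,
    "loss_weights": {**AICD_CONFIG["loss_weights"], "lambda3": 3},
}
```

Grouping the two dataset-specific configurations this way makes the single differing value (λ3) explicit, which is the only change a reproduction would need when switching from AICD to BCD.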