Symmetry-Aware Transformer-Based Mirror Detection
Authors: Tianyu Huang, Bowen Dong, Jiaying Lin, Xiaohui Liu, Rynson W.H. Lau, Wangmeng Zuo
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that SATNet outperforms both RGB and RGB-D mirror detection methods on all available mirror detection datasets. |
| Researcher Affiliation | Academia | ¹Harbin Institute of Technology, ²City University of Hong Kong, ³Peng Cheng Laboratory |
| Pseudocode | No | The paper describes the architecture and operations using text and mathematical equations, but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | Following previous works (Yang et al. 2019; Lin, Wang, and Lau 2020), we use Mirror Segmentation Dataset (MSD) and Progressive Mirror Dataset (PMD) to evaluate our method. Besides, we adopt an RGB-D dataset RGBD-Mirror to make a comparison with the state-of-the-art RGB-D mirror detection method PDNet (Mei et al. 2021). |
| Dataset Splits | No | The paper refers to 'training images' and 'testing' but gives no explicit train/validation/test breakdown or split percentages. The statement 'And for testing, we simply resize input images to 512 × 512 to evaluate our network' implies a separate test set, but no validation split is described. |
| Hardware Specification | Yes | Our network is trained on 8 Tesla V100 GPUs with 2 images per GPU for 20K iterations. |
| Software Dependencies | No | The paper states 'We implement our network on PyTorch (Paszke et al. 2019)...' but gives no version numbers or other library dependencies. |
| Experiment Setup | Yes | Our network is trained on 8 Tesla V100 GPUs with 2 images per GPU for 20K iterations. During training, we use ADAM weight decay optimizer and set β1, β2, and the weight decay to 0.9, 0.999, and 0.01, respectively. The learning rate is initialized to 6 × 10−4 and decayed by the poly strategy with the power of 1.0. |
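The training configuration in the last row maps onto standard PyTorch components. Below is a minimal sketch, assuming the paper's 'ADAM weight decay optimizer' is `torch.optim.AdamW` and implementing the poly schedule with `LambdaLR`; the model, loss, and 512 × 512 training resolution are placeholders (the paper only reports 512 × 512 as the test-time size, and no official code is released).

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

# Stand-in for SATNet: no official code is released, so a single conv
# layer substitutes for the real mirror-detection network here.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

max_iters = 20_000          # "20K iterations" (reduce for a quick smoke test)
base_lr, power = 6e-4, 1.0  # reported initial lr and poly power

# AdamW with the reported beta1=0.9, beta2=0.999, weight decay=0.01.
optimizer = AdamW(model.parameters(), lr=base_lr,
                  betas=(0.9, 0.999), weight_decay=0.01)

# Poly decay: lr(t) = base_lr * (1 - t / max_iters) ** power.
scheduler = LambdaLR(optimizer,
                     lr_lambda=lambda t: (1.0 - t / max_iters) ** power)

for step in range(max_iters):
    x = torch.randn(2, 3, 512, 512)  # 2 images per GPU (single-GPU sketch)
    y = torch.rand(2, 1, 512, 512)   # dummy binary mirror masks
    loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

Note that with power 1.0, the poly schedule reduces to a linear ramp from 6 × 10⁻⁴ down to zero over the 20K iterations.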