Efficient Mirror Detection via Multi-Level Heterogeneous Learning
Authors: Ruozhen He, Jiaying Lin, Rynson W.H. Lau
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Compared to the state-of-the-art method, HetNet runs 664% faster and draws an average performance gain of 8.9% on MAE, 3.1% on IoU, and 2.0% on F-measure on two mirror detection benchmarks. We conduct experiments on two datasets: MSD (Yang et al. 2019) and PMD (Lin, Wang, and Lau 2020). |
| Researcher Affiliation | Academia | Department of Computer Science, City University of Hong Kong ruozhenhe2-c@my.cityu.edu.hk, jiayinlin5-c@my.cityu.edu.hk, rynson.lau@cityu.edu.hk |
| Pseudocode | No | The paper describes its method in text and with mathematical formulas, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Catherine-R-He/HetNet. |
| Open Datasets | Yes | We conduct experiments on two datasets: MSD (Yang et al. 2019) and PMD (Lin, Wang, and Lau 2020). |
| Dataset Splits | No | The paper specifies training and testing sets for the datasets but does not explicitly detail a validation set split or methodology for it. |
| Hardware Specification | Yes | We implement our model by PyTorch and conduct experiments on a GeForce RTX 2080Ti GPU. |
| Software Dependencies | No | The paper states 'We implement our model by PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use the stochastic gradient descent (SGD) optimizer with a momentum value of 0.9 and a weight decay of 5e-4. In the training phase, the maximum learning rate is 1e-2, the batch size is 12, and the training epoch is 150. |
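The reported hyperparameters map directly onto a PyTorch SGD optimizer. A minimal sketch, assuming only what the paper states (the model here is a placeholder, and the paper does not specify a learning-rate schedule beyond the maximum value):

```python
# Hedged sketch of the reported training configuration.
# Only the optimizer hyperparameters, batch size, and epoch count
# come from the paper; the model is a stand-in placeholder.
import torch

model = torch.nn.Conv2d(3, 1, 3)  # placeholder, not the actual HetNet model

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-2,           # maximum learning rate reported in the paper
    momentum=0.9,      # momentum value from the paper
    weight_decay=5e-4, # weight decay from the paper
)

BATCH_SIZE = 12  # reported batch size
EPOCHS = 150     # reported number of training epochs
```

Since only the maximum learning rate is given, any warm-up or decay schedule used to reach it would need to be recovered from the released code.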