Trash to Treasure: Low-Light Object Detection via Decomposition-and-Aggregation

Authors: Xiaohan Cui, Long Ma, Tengyu Ma, Jinyuan Liu, Xin Fan, Risheng Liu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Plenty of experiments are conducted to reveal our superiority against other state-of-the-art methods. The code will be public if it is accepted. We conducted our experiments using the DARK FACE dataset, which consists of 6000 low-light images captured in real-world environments."
Researcher Affiliation | Academia | ¹School of Software Technology, Dalian University of Technology; ²School of Mechanical Engineering, Dalian University of Technology. malone94319@gmail.com, atlantis918@hotmail.com, {cuixiaohan1230, matengyu}@mail.dlut.edu.cn, {xin.fan, rsliu}@dlut.edu.cn
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. Figure 3 shows a computational flow diagram, not pseudocode.
Open Source Code | No | "The code will be public if it is accepted."
Open Datasets | Yes | "We conducted our experiments using the DARK FACE dataset, which consists of 6000 low-light images captured in real-world environments. In order to fully verify the performance of detection, we presented more results on the ExDark (Loh and Chan 2019) low-light object detection dataset."
Dataset Splits | No | "For our experiments, we randomly selected 1000 images for testing, while the remaining images were used for training. [...] 737 images were randomly sampled for testing and the remaining 6626 low-light images were used for training and validation." (The training hyperparameters quoted alongside this passage are listed under Experiment Setup; a split sketch follows the table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions optimizers like SGD and Adam, but does not provide specific version numbers for any software dependencies or libraries (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "Parameters Setting: For model training, we employed SGD with a momentum of 0.9 and weight decay of 0.0005. The batch size was set to 4, and the initial learning rate was 0.0005. For this dataset, we set the maximum epoch as 100, and the batch size as 32. We used Adam and the learning rate was initialized to 3e-5." The quote covers two distinct training configurations, apparently one per dataset; an optimizer sketch follows the table.
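
For concreteness, the random split described under Dataset Splits can be reproduced along these lines. This is a minimal sketch, assuming a flat directory of PNG files; the directory path, file extension, and seed are our assumptions, since the paper reports none of them.

    import random
    from pathlib import Path

    IMAGE_DIR = Path("darkface/images")  # hypothetical layout for the 6000 DARK FACE images
    TEST_SIZE = 1000                     # "randomly selected 1000 images for testing"
    SEED = 0                             # no seed is reported; fixed here only for repeatability

    images = sorted(IMAGE_DIR.glob("*.png"))
    rng = random.Random(SEED)
    rng.shuffle(images)

    test_split = images[:TEST_SIZE]   # 1000 randomly chosen test images
    train_split = images[TEST_SIZE:]  # remaining 5000 images used for training
    print(f"train: {len(train_split)}, test: {len(test_split)}")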
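
Similarly, the two training configurations quoted under Experiment Setup map directly onto standard PyTorch optimizers. A minimal sketch, assuming PyTorch and a stand-in module; which configuration belongs to which dataset is our reading of the quote, not something the paper states explicitly.

    import torch

    model = torch.nn.Conv2d(3, 3, 3)  # stand-in module; the paper's detector is not released

    # Configuration 1 ("Parameters Setting"): SGD with momentum 0.9,
    # weight decay 5e-4, initial learning rate 5e-4, batch size 4.
    sgd = torch.optim.SGD(model.parameters(), lr=5e-4,
                          momentum=0.9, weight_decay=5e-4)

    # Configuration 2 (presumably the second dataset): Adam with
    # learning rate 3e-5, batch size 32, at most 100 epochs.
    adam = torch.optim.Adam(model.parameters(), lr=3e-5)

    BATCH_SIZE_SGD, BATCH_SIZE_ADAM, MAX_EPOCHS = 4, 32, 100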