Cascading Convolutional Color Constancy
Authors: Huanglin Yu, Ke Chen, Kaiqi Wang, Yanlin Qian, Zhaoxiang Zhang, Kui Jia
AAAI 2020, pp. 12725-12732
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the public Color Checker and NUS 8-Camera benchmarks demonstrate superior performance of the proposed algorithm in comparison with the state-of-the-art methods, especially for more difficult scenes. |
| Researcher Affiliation | Academia | Huanglin Yu¹, Ke Chen¹, Kaiqi Wang¹, Yanlin Qian², Zhaoxiang Zhang³, Kui Jia¹; ¹South China University of Technology, ²Tampere University, ³Chinese Academy of Sciences |
| Pseudocode | No | The paper describes the network structure and loss function in detail but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source codes and pre-trained models are available at https://github.com/yhlscut/C4. |
| Open Datasets | Yes | We conduct experimental evaluation on two public color constancy benchmarks: the NUS 8-Camera dataset (Cheng, Prasad, and Brown 2014) and the re-processed Color Checker dataset (Shi 2000). |
| Dataset Splits | Yes | Following (Chen et al. 2019; Qian et al. 2019; Barron 2015), we adopt three-fold cross-validation on both datasets in all experiments. (A sketch of this split protocol appears below the table.) |
| Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU models, CPU types, memory) used for running the experiments. It only mentions using AlexNet and SqueezeNet as backbones. |
| Software Dependencies | No | The paper mentions the ADAM algorithm and backbone network architectures like AlexNet and SqueezeNet, but does not provide specific version numbers for software dependencies such as deep learning frameworks (e.g., TensorFlow, PyTorch), programming languages (e.g., Python), or other libraries. |
| Experiment Setup | Yes | During training, the ADAM algorithm (Kingma and Ba 2014) is employed to train the model with a fixed batch size (i.e., 16 in our experiments), and the learning rate is set to 3×10⁻⁴ and 1×10⁻⁴ for our C4 model based on the SqueezeNet and AlexNet backbones respectively. (A training-configuration sketch appears below the table.) |
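
For concreteness, here is a minimal sketch of the three-fold protocol noted in the Dataset Splits row, using scikit-learn's `KFold`. The index array and shuffling seed are illustrative assumptions; published color constancy evaluations typically reuse fixed fold assignments from prior work rather than drawing a fresh random split.

```python
import numpy as np
from sklearn.model_selection import KFold

# Stand-in index array; e.g., the re-processed Color Checker dataset
# contains 568 images. Real evaluations reuse the fixed folds of prior work.
indices = np.arange(568)

kfold = KFold(n_splits=3, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(indices)):
    # Train on two folds, test on the held-out third; error statistics
    # are aggregated over the three test folds.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test images")
```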
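
Similarly, a hedged PyTorch sketch of the training configuration in the Experiment Setup row: ADAM, batch size 16, and a learning rate of 3×10⁻⁴ (SqueezeNet) or 1×10⁻⁴ (AlexNet). The backbone wiring, regression head, and angular-error loss below are illustrative assumptions, not the authors' released implementation (see https://github.com/yhlscut/C4 for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Illustrative backbone choice; the paper evaluates both variants.
backbone = "squeezenet"  # or "alexnet"
if backbone == "squeezenet":
    features, feat_dim = models.squeezenet1_1(weights=None).features, 512
else:
    features, feat_dim = models.alexnet(weights=None).features, 256

# Assumed regression head: pool backbone features into a 3-vector that is
# normalized into an RGB illuminant estimate.
model = nn.Sequential(features, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(feat_dim, 3))

# Optimizer settings as reported: ADAM with lr = 3e-4 (SqueezeNet) or 1e-4 (AlexNet).
lr = {"squeezenet": 3e-4, "alexnet": 1e-4}[backbone]
optimizer = torch.optim.Adam(model.parameters(), lr=lr)

# One training step on dummy data with the fixed batch size of 16, using the
# angular-error loss that is standard in color constancy work.
images = torch.rand(16, 3, 224, 224)
gt_illuminant = F.normalize(torch.rand(16, 3), dim=1)
pred = F.normalize(model(images), dim=1)
cos = (pred * gt_illuminant).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
loss = torch.rad2deg(torch.acos(cos)).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```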