Domain Adversarial Learning for Color Constancy
Authors: Zhifeng Zhang, Xuejing Kang, Anlong Ming
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that the proposed DALCC can more effectively take advantage of multi-domain data and thus achieve state-of-the-art performance on commonly used benchmark datasets. |
| Researcher Affiliation | Academia | School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications {zhangzhifeng, kangxuejing, mal}@bupt.edu.cn |
| Pseudocode | No | The paper describes the proposed modules textually but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link for open-sourcing the code for the described methodology. |
| Open Datasets | Yes | We verify the effectiveness of our proposed DALCC on two public datasets, the NUS-8 dataset [Cheng et al., 2014] and the Cube+ dataset [Banić et al., 2017]. |
| Dataset Splits | Yes | Following [Tang et al., 2022; Xiao et al., 2020], we adopt three-fold cross-validation in all experiments. |
| Hardware Specification | No | The paper mentions implementing the network on PyTorch with CUDA support, implying GPU usage, but does not specify any particular GPU models, CPU, or other hardware details used for experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch with CUDA support' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We train our model about 3000 epochs by setting the learning rate to 1 × 10⁻⁴. The batch size is 16. We use Adam to optimize the network. |
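The reported setup (Adam, learning rate 1 × 10⁻⁴, batch size 16, ~3000 epochs) could be sketched as below. Since the DALCC code is not released, the model here is a hypothetical stand-in, and the angular-error loss is the standard color-constancy metric, not necessarily the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def angular_error(pred, gt, eps=1e-7):
    # Mean angular error in degrees between predicted and
    # ground-truth illuminant vectors (standard CC metric).
    cos = F.cosine_similarity(pred, gt, dim=1).clamp(-1 + eps, 1 - eps)
    return torch.rad2deg(torch.acos(cos)).mean()

# Hypothetical stand-in network; the paper's architecture is not open-sourced.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 3),  # predicts an RGB illuminant estimate
)

# Reported hyperparameters: Adam, lr = 1e-4, batch size 16.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random data.
images = torch.rand(16, 3, 64, 64)
illuminants = F.normalize(torch.rand(16, 3), dim=1)
loss = angular_error(model(images), illuminants)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In the paper's protocol this step would repeat for roughly 3000 epochs per fold of the three-fold cross-validation.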