- Title
Dual-Discriminator Generative Adversarial Network with Uniform Color Information Extraction for Color Constancy.
- Authors
Huiting Xu; Zhenshan Tan; Zhijiang Li; Shuying Lyu
- Abstract
The generative adversarial network (GAN) has attracted extensive attention in color constancy because it allows pixel-wise supervision. However, the misinterpretation of color features and the low discriminator sensitivity caused by strong correlation among multiple features limit the learning capability of GANs. To address these issues, we propose a dual-discriminator generative adversarial network (DDGAN), which includes a color feature learning (CFL) module, a feature fusion discriminator (FFD) module, and a global consistency constraint (GCC) module. First, CFL attends to regions with uniform color, enabling the generator to learn distinguishable color information. Second, FFD is a discriminator module containing two feature extraction branches: one extracts color features and the other extracts globally correlated features. These features are then fused to weaken structural features and enhance the discriminator's sensitivity to color features. Finally, GCC imposes global consistency constraints to reconsider the structural features weakened by FFD and to unify structural and color features, yielding images that are more uniform in both color and content. Extensive experiments on the ColorChecker RECommended, NUS 8-Camera, and Cube datasets show that our DDGAN outperforms other GAN-based methods in terms of five popular metrics.
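The two-branch fusion idea described for FFD can be sketched as follows. This is a minimal, hypothetical illustration only: the branch definitions (chromaticity statistics for the color branch, channel covariance for the globally correlated branch), the fusion weights, and the down-weighting factor applied to the structural branch are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_branch(img):
    # Hypothetical color-feature extractor: mean per-pixel chromaticity.
    flat = img.reshape(-1, 3)
    s = flat.sum(axis=1, keepdims=True) + 1e-8
    return (flat / s).mean(axis=0)              # 3-d color summary

def global_branch(img):
    # Hypothetical globally correlated features: upper-triangular
    # entries of the channel covariance matrix.
    flat = img.reshape(-1, 3)
    cov = np.cov(flat, rowvar=False)
    return cov[np.triu_indices(3)]              # 6-d structure summary

def fused_discriminator_score(img, w_color, w_global, bias):
    # Fuse both branches; down-weighting the structural branch (0.1 is
    # an assumed factor) mimics weakening structural features so the
    # discriminator is more sensitive to color features.
    z = w_color @ color_branch(img) + 0.1 * (w_global @ global_branch(img)) + bias
    return 1.0 / (1.0 + np.exp(-z))             # real/fake probability

img = rng.random((8, 8, 3))                     # toy "image"
w_c = rng.standard_normal(3)
w_g = rng.standard_normal(6)
score = fused_discriminator_score(img, w_c, w_g, 0.0)
print(score)
```

In the actual DDGAN the branches would be learned convolutional networks and the fusion trainable; the sketch only shows how two feature streams can be combined into a single discriminator decision with the structural stream attenuated.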
- Subjects
GENERATIVE adversarial networks; DATA mining; STRUCTURAL colors; FEATURE extraction
- Publication
Journal of Imaging Science & Technology, 2024, Vol 68, Issue 2, p1
- ISSN
1062-3701
- Publication type
Article
- DOI
10.2352/J.ImagingSci.Technol.2024.68.2.020401