- Title
Fashion Content and Style Transfer Based on Generative Adversarial Networks (基于生成对抗网络的时尚内容和风格迁移)
- Authors
丁文华; 杜军威; 侯磊; 刘金环
- Abstract
Generative adversarial networks are widely used for image-to-image translation tasks such as image colorization, semantic synthesis, and style transfer. However, training current image generation models typically depends on large paired datasets, and such models can only convert between two image domains. To address these problems, a content and style transfer model based on generative adversarial networks (CS-GAN) is proposed. The model maximizes the mutual information between fashion items and generated images through a contrastive learning framework, ensuring that content transfer is achieved without changing the structure of the fashion items. Through a layer-wise dynamic convolution method, style features are learned adaptively for different style images, enabling arbitrary style transfer of fashion items. The content features (such as color and texture) of the input fashion items are fused with style features (such as Monet's style and Cubism) to achieve conversion across multiple image domains. Comparative experiments and analysis on a public fashion dataset show that, compared with other mainstream methods, the proposed method improves image synthesis quality, average Inception Score (IS), and Fréchet Inception Distance (FID).
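The abstract names two mechanisms: a contrastive objective that maximizes mutual information between the fashion item and the generated image, and a layer-wise dynamic convolution whose filters adapt to each style image. The sketch below illustrates how such components are commonly built; it is not the authors' code, and the function and class names (patch_nce_loss, DynamicConv2d), feature shapes, and the PyTorch framing are assumptions made for illustration only.

```python
# Minimal sketch of a patch-level InfoNCE loss (mutual-information surrogate)
# and a style-conditioned dynamic convolution layer. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_gen, temperature=0.07):
    """InfoNCE over matched feature patches: each generated-image patch should
    match the source patch at the same location (positive) and differ from
    patches at other locations (negatives)."""
    # feat_*: (N, C) rows of per-patch features sampled from encoder activations
    feat_src = F.normalize(feat_src, dim=1)
    feat_gen = F.normalize(feat_gen, dim=1)
    logits = feat_gen @ feat_src.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)                 # diagonal entries are positives

class DynamicConv2d(nn.Module):
    """Convolution whose kernel is predicted per sample from a style vector,
    so different style images induce different filters."""
    def __init__(self, in_ch, out_ch, style_dim, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.kernel_pred = nn.Linear(style_dim, out_ch * in_ch * k * k)

    def forward(self, x, style):                            # x: (B,C,H,W), style: (B,style_dim)
        B = x.size(0)
        w = self.kernel_pred(style).view(B * self.out_ch, self.in_ch, self.k, self.k)
        # Grouped-conv trick: fold the batch into channels so each sample
        # is convolved with its own predicted kernel.
        out = F.conv2d(x.reshape(1, B * self.in_ch, *x.shape[2:]),
                       w, padding=self.k // 2, groups=B)
        return out.view(B, self.out_ch, *x.shape[2:])

if __name__ == "__main__":
    # Random tensors stand in for encoder features and style codes.
    loss = patch_nce_loss(torch.randn(64, 256), torch.randn(64, 256))
    layer = DynamicConv2d(in_ch=64, out_ch=64, style_dim=128)
    y = layer(torch.randn(2, 64, 32, 32), torch.randn(2, 128))
    print(loss.item(), y.shape)
```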
- Publication
Journal of Computer Engineering & Applications, 2024, Vol 60, Issue 9, p261
- ISSN
1002-8331
- Publication type
Article
- DOI
10.3778/j.issn.1002-8331.2212-0265