- Title
Deep multimodal representation learning for generalizable person re-identification.
- Authors
Xiang, Suncheng; Chen, Hao; Ran, Wei; Yu, Zefang; Liu, Ting; Qian, Dahong; Fu, Yuzhuo
- Abstract
Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance. Recently, supervised or semi-supervised learning paradigms, which benefit from large-scale datasets and strong computing performance, have achieved competitive performance on a specific target domain. However, when Re-ID models are directly deployed in a new domain without target samples, they always suffer from considerable performance degradation and poor domain generalization. To address this challenge, we propose a Deep Multimodal Representation Learning network that elaborates rich semantic knowledge to assist representation learning during pre-training. Importantly, a multimodal representation learning strategy is introduced to translate the features of different modalities into a common space, which can significantly boost the generalization capability of the Re-ID model. In the fine-tuning stage, a realistic dataset is adopted to fine-tune the pre-trained model for better distribution alignment with real-world data. Comprehensive experiments on benchmarks demonstrate that our method significantly outperforms previous domain generalization and meta-learning methods by a clear margin. Our source code will also be publicly available at https://github.com/JeremyXSC/DMRL.
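The abstract's core idea of translating features from different modalities into a common space can be illustrated with a minimal, generic sketch: one linear projection per modality maps its features into a shared embedding space where cross-modal similarity becomes a simple dot product. This is not the paper's actual DMRL architecture; all dimensions, names, and the random-projection setup here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature dimensions (not taken from the paper).
dim_image, dim_text, dim_common = 512, 300, 128

# One learnable linear projection per modality; here initialized randomly
# as a stand-in for trained weights.
W_image = rng.standard_normal((dim_image, dim_common)) / np.sqrt(dim_image)
W_text = rng.standard_normal((dim_text, dim_common)) / np.sqrt(dim_text)

def to_common(x, W):
    """Project a batch of modality features into the common space, then L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Toy batch: paired image/text features describing the same identities.
img_feats = rng.standard_normal((4, dim_image))
txt_feats = rng.standard_normal((4, dim_text))

z_img = to_common(img_feats, W_image)
z_txt = to_common(txt_feats, W_text)

# Both modalities now share the shape (batch, dim_common), so cross-modal
# cosine similarity reduces to a matrix product of unit vectors.
sim = z_img @ z_txt.T
print(sim.shape)  # (4, 4)
```

In a trained system the projections would be optimized (e.g. with a contrastive or alignment loss) so that features of the same identity from different modalities land close together in the common space.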
- Subjects
KNOWLEDGE representation (Information theory); VIDEO surveillance; SOURCE code; LEARNING; MULTIMODAL user interfaces; LEARNING strategies; SUPERVISED learning
- Publication
Machine Learning, 2024, Vol 113, Issue 4, p1921
- ISSN
0885-6125
- Publication type
Article
- DOI
10.1007/s10994-023-06352-7