- Title
Pose-guided feature region-based fusion network for occluded person re-identification.
- Authors
Xie, Gengsheng; Wen, Xianbin; Yuan, Liming; Wang, Jianchen; Guo, Changlun; Jia, Yansong; Li, Minghao
- Abstract
Learning discriminative features from training datasets while filtering out features of occlusions is critical in person-retrieval scenarios. Most current person re-identification (Re-ID) methods based on classification or deep metric representation learning tend to overlook occlusion in the training set: representations of obstacles are easily over-fitted and misleading because they are treated as part of the human body. To alleviate the occlusion problem, we propose a pose-guided feature region-based fusion network (PFRFN) that uses pose landmarks to guide local feature learning, yielding local features with good properties, and the representation-learning risk is evaluated separately via a loss on each part. Compared with using only a global classification loss, jointly considering the per-part losses and the results of robust pose estimation enables the deep network to learn representations of the body parts that are prominently displayed in the image and to gain discriminative power in occluded scenes. Experimental results on multiple datasets, i.e., Market-1501, DukeMTMC, CUHK03, demonstrate the effectiveness of our method in a variety of scenarios.
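The abstract describes combining a global classification loss with per-part losses, where pose estimation decides which body parts contribute. A minimal NumPy sketch of that idea is below; the function `pfrfn_style_loss`, the confidence threshold, and the uniform averaging of part losses are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single sample.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def pfrfn_style_loss(global_logits, part_logits, part_conf, label,
                     conf_thresh=0.5):
    # Hypothetical sketch: global ID-classification loss plus per-part
    # losses, keeping only parts whose pose-landmark confidence exceeds
    # a threshold (i.e., parts likely visible rather than occluded).
    loss = cross_entropy(global_logits, label)
    visible = [cross_entropy(pl, label)
               for pl, c in zip(part_logits, part_conf)
               if c >= conf_thresh]
    if visible:
        # Average over visible parts so the part term does not grow
        # with the number of detected landmarks.
        loss += np.mean(visible)
    return loss
```

When every part is occluded (all confidences below the threshold), the loss reduces to the global term alone, which matches the abstract's intent of not fitting to representations of obstacles.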
- Subjects
HUMAN body
- Publication
Multimedia Systems, 2023, Vol. 29, Issue 3, p. 1771
- ISSN
0942-4962
- Publication type
Article
- DOI
10.1007/s00530-021-00752-2