- Title
Deep Supervised Hashing by Fusing Multiscale Deep Features for Image Retrieval †.
- Authors
Redaoui, Adil; Belalia, Amina; Belloulata, Kamel
- Abstract
Deep network-based hashing has gained significant popularity in recent years, particularly in the field of image retrieval. However, most existing methods focus only on semantic information extracted from the final layer, disregarding the structural information carried by earlier layers, which contains details crucial for effective hash learning. Structural information captures the spatial relationships between objects in an image, while image retrieval also requires a holistic, semantically focused representation; balancing the two is a crucial consideration, essential for both accurate retrieval results and a meaningful representation of the underlying image structure. To address this limitation and improve image retrieval accuracy, we propose a novel deep hashing method called Deep Supervised Hashing by Fusing Multiscale Deep Features (DSHFMDF). Our approach extracts multiscale features from multiple convolutional layers and fuses them to generate more robust representations for efficient image retrieval. The experimental results demonstrate that our method surpasses the performance of state-of-the-art hashing techniques, with absolute increases of 11.1% and 8.3% in Mean Average Precision (MAP) on the CIFAR-10 and NUS-WIDE datasets, respectively.
- Subjects
IMAGE retrieval; IMAGE representation; INFORMATION retrieval
- Publication
Information (2078-2489), 2024, Vol 15, Issue 3, p143
- ISSN
2078-2489
- Publication type
Article
- DOI
10.3390/info15030143
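- Note

The fuse-then-binarize idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's method: DSHFMDF learns the fusion and hash layers end-to-end with supervision, whereas here the feature maps and the projection matrix `W` are random placeholders, and the scale shapes and 48-bit code length are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_multiscale(feature_maps):
    """Globally average-pool each scale's feature map and concatenate the
    pooled vectors into one fused descriptor (multiscale feature fusion)."""
    pooled = [fm.mean(axis=(1, 2)) for fm in feature_maps]
    return np.concatenate(pooled)

def hash_codes(fused, projection):
    """Project the fused descriptor to k dimensions and binarize by sign,
    yielding a +/-1 hash code."""
    return np.sign(fused @ projection)

# Simulated activations from three convolutional stages of a backbone:
# channels grow while spatial resolution shrinks (shapes are assumptions).
scales = [rng.standard_normal((64, 56, 56)),
          rng.standard_normal((128, 28, 28)),
          rng.standard_normal((256, 14, 14))]

fused = fuse_multiscale(scales)        # 64 + 128 + 256 = 448-dim descriptor
W = rng.standard_normal((448, 48))     # stand-in for a learned hash projection
code = hash_codes(fused, W)            # 48-bit +/-1 code

# Retrieval: rank database codes by Hamming distance to the query code.
# For +/-1 codes, d_H = (k - dot) / 2.
db = np.sign(rng.standard_normal((1000, 48)))
hamming = (48 - db @ code) / 2
ranking = np.argsort(hamming)          # nearest database items first
```

In the actual method, the pooling, fusion, and projection would be trainable layers optimized with a supervised hashing loss rather than fixed random matrices.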