- Title
HISNet: a Human Image Segmentation Network aiding bokeh effect generation.
- Authors
Gupta, Shaurya; Vishwakarma, Dinesh Kumar
- Abstract
The bokeh effect in photography has gained unquestionable popularity since improvements in smartphone cameras, as it draws attention to the subject and enhances the overall quality of the photo. Generally, such effects are achieved via dual-lens cameras that auto-focus onto the subject; smartphones with a single lens, however, rely on software to generate the effect. This paper proposes a deep learning pipeline that generates depth-aware segmentation maps of human images via segmentation and depth estimation networks. We present a concatenation-based decoder for segmentation, applying and experimenting with features learned by state-of-the-art encoder architectures; we further concatenate the encodings of two prominent encoders to form an ensemble model for learning segments. In addition, we combine a prominent depth estimation architecture with our segmentation results to generate depth-aware segmentation maps, producing photos with sharper focus on human subjects while out-of-focus regions appear blurred. The methodology produces compelling bokeh effects, comparable with shots taken by a dual-lens mobile camera or a DSLR. For human segmentation, benchmark results are reported with our best-performing model: training on the Supervisely Persons dataset achieved an IOU score of 95.88%, while training the same network on the EG1800 dataset achieved a state-of-the-art IOU of 96.89%. The final segmentation model thus provides highly accurate segmentation maps suitable for our task.
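The final compositing step the abstract describes (combining a human segmentation mask and a depth map so that out-of-focus regions are blurred) can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation; the function names, the focus-depth threshold, and the simple separable Gaussian blur are all assumptions introduced here for clarity.

```python
import numpy as np

def gaussian_kernel(size=15, sigma=5.0):
    # 1-D Gaussian, normalized to sum to 1 (used separably below)
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, kernel):
    # Separable Gaussian blur: convolve rows, then columns, per channel
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, img)
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, out)
    return out

def synthetic_bokeh(image, seg_mask, depth, focus_depth=0.0, tolerance=0.2):
    """Blend the sharp subject with a blurred background.

    image:    H x W x 3 float array
    seg_mask: H x W mask, 1.0 on human pixels (segmentation network output)
    depth:    H x W depth map normalized to [0, 1] (depth network output)
    """
    # A pixel stays sharp only if it is on the subject AND near the focus plane
    in_focus = seg_mask * (np.abs(depth - focus_depth) < tolerance)
    blurred = blur(image.astype(float), gaussian_kernel())
    alpha = in_focus[..., None]          # broadcast mask over color channels
    return alpha * image + (1.0 - alpha) * blurred
```

Subject pixels pass through unchanged (alpha = 1), while background pixels take the blurred value; a production pipeline would typically vary the blur radius with depth rather than using a single kernel.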
- Subjects
DEEP learning; IMAGE segmentation; BOKEH (Photography); DIGITAL single-lens reflex cameras; HUMAN experimentation; HUMAN beings
- Publication
Multimedia Tools & Applications, 2023, Vol 82, Issue 8, p12469
- ISSN
1380-7501
- Publication type
Article
- DOI
10.1007/s11042-022-13900-1