- Title
Non-linear integration of loss terms for improved new view synthesis.
- Authors
El-Shazly, Ehab H.; Abdelhakim, Assem; Zhang, Xiaoyan; Fares, Ahmed
- Abstract
The new view synthesis problem can be tackled through different approaches, depending on whether a single image or multiple images are available as input to the system. Previous methods for new view synthesis can be divided into image-based rendering methods (e.g., flow prediction) and pixel generation methods. While directly regressing pixels for new view synthesis produces structurally consistent results, it generates blurry images. Flow prediction, on the other hand, can generate realistic texture, but it is unable to generate regions that are not present in the source image(s). We propose a deep framework that combines a flow prediction module and a recurrent pixel generation module to achieve improved performance via joint learning on both modules. The flow prediction module estimates a dense flow field that is used to sample the new target image from the given source image using a spatial transformer network, while the pixel generation module is trained to directly synthesize a target image from a set of source images, progressively refining and improving its prediction. In addition, we introduce a non-linear combination of loss terms to optimize the learning process: we highlight those parts of the scene that are not common across the two synthesized views and thereby complete the fusion over all the learning modules.
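The two ingredients described in the abstract can be illustrated with a minimal sketch: warping a source image by a dense flow field with bilinear sampling (the spatial-transformer-style step), and combining two per-module losses non-linearly. Both functions below are hypothetical illustrations, assuming NumPy arrays; the exact non-linear loss form used by the authors is not given in the abstract, so a geometric-mean-style product is shown purely as one possibility.

```python
import numpy as np

def warp_with_flow(source, flow):
    """Warp a (H, W) source image by a dense (H, W, 2) flow field of
    (dy, dx) offsets using bilinear sampling, in the spirit of a spatial
    transformer network. Hypothetical helper, not the authors' code."""
    H, W = source.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Per-pixel sampling coordinates in the source image, clamped to bounds.
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, H - 1), np.clip(x0 + 1, 0, W - 1)
    wy, wx = sy - y0, sx - x0
    # Bilinear interpolation over the four neighbouring source pixels.
    top = source[y0, x0] * (1 - wx) + source[y0, x1] * wx
    bot = source[y1, x0] * (1 - wx) + source[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def combined_loss(flow_pred, pixel_pred, target, alpha=0.5):
    """One possible non-linear combination of the two per-module L1 losses:
    a weighted product rather than a weighted sum (assumption; the abstract
    does not specify the exact functional form)."""
    l_flow = np.mean(np.abs(flow_pred - target))
    l_pixel = np.mean(np.abs(pixel_pred - target))
    return (l_flow ** alpha) * (l_pixel ** (1 - alpha))
```

A zero flow field reproduces the source image exactly, and an integer horizontal flow shifts it by that many pixels, which makes the sampling step easy to sanity-check.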
- Subjects
LEARNING modules; PIXELS
- Publication
Multimedia Tools & Applications, 2024, Vol 83, Issue 22, p62089
- ISSN
1380-7501
- Publication type
Article
- DOI
10.1007/s11042-023-16265-1