- Title
Multimodal sensor fusion in the latent representation space.
- Authors
Piechocki, Robert J.; Wang, Xiaoyang; Bocus, Mohammud J.
- Abstract
A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e., compressed sensing. We demonstrate the effectiveness and excellent performance of the method on a range of multimodal fusion experiments such as multisensory classification, denoising, and recovery from subsampled observations.
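The second stage described above, using a pre-trained generative model as a reconstruction prior, can be sketched as a latent-space search: find the latent code whose per-modality reconstructions jointly best explain all sensor observations. The sketch below is illustrative only, not the authors' implementation; it substitutes a fixed linear decoder per modality for the learned generative model, and all names and shapes are assumptions.

```python
import numpy as np

# Hypothetical sketch of latent-space sensor fusion: a pre-trained
# generative model (stand-in: one fixed linear decoder per modality)
# defines the search manifold, and fusion is gradient descent on the
# joint reconstruction loss over the shared latent code z.

rng = np.random.default_rng(0)
latent_dim, d1, d2 = 4, 16, 12  # illustrative sizes

# Stand-in "generative model": linear decoders for two modalities.
W1 = rng.standard_normal((d1, latent_dim))
W2 = rng.standard_normal((d2, latent_dim))

# Ground-truth latent state and noisy observations from two sensors.
z_true = rng.standard_normal(latent_dim)
y1 = W1 @ z_true + 0.05 * rng.standard_normal(d1)  # sensor 1
y2 = W2 @ z_true + 0.05 * rng.standard_normal(d2)  # sensor 2

# Fusion: search the latent space for the code that reconstructs
# both observations, by minimising the summed squared residuals.
z = np.zeros(latent_dim)
lr = 0.01
for _ in range(2000):
    grad = W1.T @ (W1 @ z - y1) + W2.T @ (W2 @ z - y2)
    z -= lr * grad

print(np.linalg.norm(z - z_true))  # small: fused estimate near truth
```

With a nonlinear learned decoder, the same loop would backpropagate through the decoder instead of using the closed-form linear gradient; subsampled (compressed-sensing) observations would simply insert a known measurement operator in front of each decoder.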
- Subjects
DETECTORS; MULTIMODAL user interfaces; CLASSIFICATION
- Publication
Scientific Reports, 2023, Vol 13, Issue 1, p1
- ISSN
2045-2322
- Publication type
Article
- DOI
10.1038/s41598-022-24754-w