- Title
Relation-Aware Image Captioning with Hybrid-Attention for Explainable Visual Question Answering.
- Authors
Ying-Jia Lin; Ching-Shan Tseng; Hung-Yu Kao
- Abstract
Recent studies leveraging object detection as the preliminary step for Visual Question Answering (VQA) ignore the relationships between different objects inside an image based on the textual question. In addition, previous VQA models work like black-box functions, making it difficult to explain why a model provides a particular answer to the corresponding inputs. To address these issues, we propose a new model structure that strengthens the representations of different objects and provides explainability for the VQA task. We construct a relation graph to capture the relative positions between region pairs and then create relation-aware visual features with a relation encoder based on graph attention networks. To make the final VQA predictions explainable, we introduce a multi-task learning framework with an additional explanation generator that helps our model produce reasonable explanations. Simultaneously, the generated explanations are incorporated with the visual features through a novel Hybrid-Attention mechanism to enhance cross-modal understanding. Experiments show that the proposed method performs better on the VQA task than several baselines. In addition, incorporating the explanation generator provides reasonable explanations along with the predicted answers.
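The core idea in the abstract, graph attention over image regions where attention scores are biased by relative-position features between region pairs, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function name `relation_aware_attention`, the weight matrices, and the toy shapes are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_attention(V, rel, Wq, Wk, Wv, Wr):
    """One graph-attention step over N region features V (N, d).

    rel (N, N, r) holds relative-position features for each region pair;
    attention mixes content similarity with a learned relation bias,
    in the spirit of a relation encoder based on graph attention networks.
    (Hypothetical formulation, not the paper's exact equations.)
    """
    Q, K = V @ Wq, V @ Wk
    content = Q @ K.T / np.sqrt(Q.shape[1])   # (N, N) content scores
    bias = (rel @ Wr).squeeze(-1)             # (N, N) relation bias from pair features
    A = softmax(content + bias, axis=-1)      # each region attends over all regions
    return A @ (V @ Wv)                       # relation-aware region features

# toy example: 4 regions, 8-dim features, 4-dim relative-position features
rng = np.random.default_rng(0)
N, d, r = 4, 8, 4
V = rng.normal(size=(N, d))
rel = rng.normal(size=(N, N, r))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Wr = rng.normal(size=(r, 1))
out = relation_aware_attention(V, rel, Wq, Wk, Wv, Wr)
print(out.shape)  # (4, 8): one relation-aware feature per region
```

In the paper's framework these relation-aware features would then be fused with generated-explanation features via the Hybrid-Attention mechanism before answer prediction; that fusion step is omitted here.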
- Subjects
EXPLANATION
- Publication
Journal of Information Science & Engineering, 2024, Vol 40, Issue 3, p649
- ISSN
1016-2364
- Publication type
Article
- DOI
10.6688/JISE.202405_40(3).0014