- Title
StableNet: Distinguishing the hard samples to overcome language priors in visual question answering.
- Authors
Yu, Zhengtao; Zhao, Jia; Guo, Chenliang; Yang, Ying
- Abstract
With the booming fields of computer vision and natural language processing, cross‐modal intersections such as visual question answering (VQA) have become very popular. However, several studies have shown that many VQA models suffer from severe language prior problems. After a series of experiments, the authors found that previous VQA models are in an unstable state: when training is repeated several times on the same dataset, there are significant differences between the distributions of the predicted answers given by the models each time, and these models also perform unsatisfactorily in terms of accuracy. The reason for model instability is that some difficult samples seriously interfere with model training, so the authors design a method to measure model stability quantitatively and further propose a method that can alleviate both model imbalance and instability. Precisely, question types are classified into simple and difficult ones, and different weighting measures are applied to each. By imposing constraints on the training process for both types of questions, the stability and accuracy of the model improve. Experimental results demonstrate the effectiveness of the method, which achieves 63.11% on VQA‐CP v2 and 75.49% with the addition of the pre‐trained model.
- Subjects
QUESTION answering systems; NATURAL language processing; COMPUTER vision; VISUAL fields
- Publication
IET Computer Vision (Wiley-Blackwell), 2024, Vol. 18, Issue 2, p. 315
- ISSN
1751-9632
- Publication type
Article
- DOI
10.1049/cvi2.12249