Multimodal Attention Variants for Visual Question Answering

Date
2023
Abstract
Visual Question Answering (VQA) is an active field of research that involves answering natural language questions about an image. This multimodal task requires models to understand the syntax and semantics of the question, attend to the relevant objects in the image, and infer the answer using both image and text semantics. Because of this complexity, VQA has gained considerable attention from both the vision and natural language research communities.
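To illustrate the kind of mechanism the abstract refers to, the sketch below shows a minimal question-guided attention layer over image region features, a common building block that multimodal attention variants for VQA extend. It is not taken from the thesis; the class name, feature dimensions, and layer choices are illustrative assumptions only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedAttention(nn.Module):
    """Illustrative sketch: attend over image regions using the question embedding."""
    def __init__(self, img_dim=2048, ques_dim=1024, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.ques_proj = nn.Linear(ques_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, img_feats, ques_feat):
        # img_feats: (batch, num_regions, img_dim) region features
        # ques_feat: (batch, ques_dim) pooled question embedding
        joint = torch.tanh(self.img_proj(img_feats) +
                           self.ques_proj(ques_feat).unsqueeze(1))
        attn = F.softmax(self.score(joint), dim=1)        # (batch, num_regions, 1)
        attended = (attn * img_feats).sum(dim=1)          # (batch, img_dim)
        return attended, attn.squeeze(-1)

# Example usage with random tensors (dimensions assumed, not from the thesis)
att = QuestionGuidedAttention()
v = torch.randn(2, 36, 2048)   # e.g. 36 detected regions per image
q = torch.randn(2, 1024)
fused, weights = att(v, q)

The attended image vector can then be combined with the question embedding and passed to an answer classifier; the thesis studies variants of this attention step.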
Description
Supervisors: Anand, Ashish and Guha, Prithwijit
Keywords
Department of Computer Science and Engineering