Multimodal Attention Variants for Visual Question Answering
Visual Question Answering (VQA) is an active field of research that involves answering natural language questions about an image. This multimodal task requires models to understand the syntax and semantics of the question, attend to the relevant objects in the image, and infer the answer using both image and text semantics. Owing to this complex behavior, VQA has gained considerable attention from both the computer vision and natural language processing research communities.
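As an illustration of the attention mechanism the abstract alludes to, the following is a minimal, hypothetical sketch (not the thesis's actual model): a question embedding scores a set of image-region features, the scores are softmax-normalized, and the weighted sum gives an attended image representation. All names and dimensions here are assumptions for illustration only.

```python
import numpy as np

def attend(question_vec, region_feats):
    """Hypothetical single-glimpse attention over image regions.

    question_vec : (d,) question embedding
    region_feats : (num_regions, d) image-region features
    Returns the attended image feature and the attention weights.
    """
    scores = region_feats @ question_vec        # relevance of each region
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    attended = weights @ region_feats           # weighted sum of regions
    return attended, weights

# Toy example: 4 regions with 8-d features and an 8-d question embedding.
rng = np.random.default_rng(0)
regions = rng.normal(size=(4, 8))
question = rng.normal(size=8)
attended, weights = attend(question, regions)
```

Real VQA attention variants replace the dot-product scoring above with learned projections and typically combine the attended image feature with the question representation before classifying over candidate answers.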
Supervisors: Anand, Ashish and Guha, Prithwijit
Department of Computer Science and Engineering