Accounting for Focus Ambiguity in Visual Questions

University of Texas at Austin | University of Colorado Boulder

Overview

No existing work on visual question answering explicitly accounts for ambiguity about where the content described in the question is located in the image. To fill this gap, we introduce VQ-FocusAmbiguity, the first VQA dataset that visually grounds every region described in the question that is necessary to arrive at the answer. We then provide analysis showing how visually grounding "questions" is distinct from visually grounding "answers", and characterize the properties of the questions and segmentations provided in our dataset. Finally, we benchmark modern models on two novel tasks: recognizing whether a visual question has focus ambiguity, and localizing all plausible focus regions within the image. Results show that the dataset is challenging for modern models.
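To make the two benchmark tasks concrete, the sketch below shows one plausible way to score them: binary accuracy for recognizing whether a visual question has focus ambiguity, and a recall-style measure checking whether every ground-truth focus region is matched by some predicted region at an IoU threshold. The mask format (boolean NumPy arrays), the 0.5 threshold, and all function names here are illustrative assumptions, not the dataset's official evaluation protocol.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, gt).sum() / union)

def ambiguity_recognition_accuracy(pred_labels, gt_labels) -> float:
    """Task 1 (sketch): fraction of visual questions whose predicted
    has-focus-ambiguity label matches the ground-truth label."""
    pred = np.asarray(pred_labels, dtype=bool)
    gt = np.asarray(gt_labels, dtype=bool)
    return float((pred == gt).mean())

def focus_region_recall(pred_masks, gt_masks, iou_thresh=0.5) -> float:
    """Task 2 (sketch): fraction of ground-truth focus regions matched by
    at least one predicted region at the given IoU threshold."""
    if not gt_masks:
        return 1.0  # nothing to localize
    matched = sum(
        any(mask_iou(pred, gt) >= iou_thresh for pred in pred_masks)
        for gt in gt_masks
    )
    return matched / len(gt_masks)

# Toy usage: two ground-truth regions, one of which is recovered.
gt_a = np.zeros((4, 4), dtype=bool); gt_a[:2, :2] = True
gt_b = np.zeros((4, 4), dtype=bool); gt_b[2:, 2:] = True
pred = [gt_a.copy()]
print(focus_region_recall(pred, [gt_a, gt_b]))  # -> 0.5
```

A recall-oriented metric is used in this sketch because Task 2 asks for all plausible focus regions; a full evaluation would likely also penalize spurious predictions (e.g., via precision or an F-score), which is omitted here for brevity.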

Contact

For any questions, comments, or feedback, please send them to Chongyan Chen at chongyanchen_hci@utexas.edu.