Paper in CVPR 2019 on “Audio Visual Scene-Aware Dialog”


We introduce the task of scene-aware dialog. Our goal is to generate a complete and natural response to a question about a scene, given video and audio of the scene and the history of previous turns in the dialog. To answer successfully, agents must ground concepts from the question in the video while leveraging contextual cues from the dialog history. To benchmark this task, we introduce the Audio Visual Scene-Aware Dialog (AVSD) Dataset. For each of more than 11,000 videos of human actions from the Charades dataset, our dataset contains a dialog about the video, plus a final summary of the video by one of the dialog participants. We train several baseline systems for this task and evaluate the performance of the trained models using both qualitative and quantitative metrics. Our results indicate that models must utilize all the available inputs (video, audio, question, and dialog history) to perform best on this dataset.
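The abstract notes that the best-performing models must combine all four inputs (video, audio, question, and dialog history). A common way to do this in such baselines is late fusion: encode each modality into a feature vector, concatenate, and score candidate responses in a shared space. The sketch below illustrates that idea only; the dimensions, function names, and random features are hypothetical and are not taken from the paper's actual baseline architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature sizes (illustrative, not from the paper).
DIMS = {"video": 16, "audio": 8, "question": 12, "history": 12}

def fuse_features(features):
    """Late fusion: concatenate per-modality feature vectors
    in a fixed (sorted) modality order."""
    return np.concatenate([features[m] for m in sorted(features)])

def score_answers(fused, answer_embeddings, W):
    """Project the fused vector into the answer-embedding space
    and score each candidate answer by dot product."""
    projected = W @ fused
    return answer_embeddings @ projected

# Toy example with random features and two candidate answers.
features = {m: rng.standard_normal(d) for m, d in DIMS.items()}
fused = fuse_features(features)               # length 16+8+12+12 = 48
W = rng.standard_normal((10, fused.shape[0])) # projection to a 10-d space
answers = rng.standard_normal((2, 10))        # two candidate answers
scores = score_answers(fused, answers, W)
print(scores.shape)  # (2,)
```

Dropping a modality from `DIMS` in this sketch mirrors the paper's ablation-style comparison: the model simply sees a shorter fused vector, which is how one would probe whether each input contributes to performance.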


  • H. Alamri, V. Cartillier, A. Das, J. Wang, A. Cherian, I. Essa, D. Batra, T. K. Marks, C. Hori, P. Anderson, S. Lee, and D. Parikh (2019), “Audio Visual Scene-Aware Dialog,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. [PDF] [DOI] [arXiv] [BIBTEX]
    @InProceedings{2019-Alamri-AVSD,
      author    = {Alamri, Huda and Cartillier, Vincent and Das, Abhishek and
                   Wang, Jue and Cherian, Anoop and Essa, Irfan and Batra, Dhruv and
                   Marks, Tim K. and Hori, Chiori and Anderson, Peter and
                   Lee, Stefan and Parikh, Devi},
      title     = {{A}udio {V}isual {S}cene-{A}ware {D}ialog},
      booktitle = {{IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}},
      month     = {June},
      year      = {2019},
      doi       = {10.1109/CVPR.2019.00774},
      arxiv     = {},
      pdf       = {}
    }

Additional information available from the CVPR 2019 Open Access Site
