Featured Publications

Gradient-based meta-learning approaches have been successful in few-shot learning, transfer learning, and a wide range of other domains. Despite their efficacy and simplicity, the burden of computing the Hessian matrix, with its large memory footprint, is the critical challenge in large-scale applications. To tackle this issue, we propose a simple yet effective method that reduces the cost by reusing the same gradient across a window of inner steps. We describe the dynamics of this multi-step estimation in the Lagrangian formalism and discuss how estimating the dynamics reduces the number of second-order derivative evaluations. To validate our method, we experiment on meta-transfer learning and few-shot learning tasks under multiple settings. The meta-transfer experiment highlights the applicability of our method to training meta-networks, where other approximations are limited. For few-shot learning, we compare time and memory complexities against popular baselines. Our method significantly reduces training time and memory usage while maintaining competitive accuracy, and even outperforms the baselines in some cases.
In arXiv, 2020
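
The gradient-reuse idea above can be illustrated with a short PyTorch-style sketch. The function below is a hypothetical MAML-style inner loop (not the paper's code): the gradient is recomputed only at the start of each window and reused for the following steps, so second-order terms need to be tracked only at window boundaries.

```python
import torch

def inner_loop(params, loss_fn, data, n_steps=4, window=2, lr=0.01):
    """Illustrative inner loop that reuses one gradient per window of steps.

    Assumes `params` is a list of tensors with requires_grad=True and
    `loss_fn(params, data)` returns a scalar loss (hypothetical names).
    """
    grads = None
    for step in range(n_steps):
        if step % window == 0:
            # Recompute the gradient only at the start of each window;
            # create_graph=True keeps the dependence needed for the outer (meta) update.
            loss = loss_fn(params, data)
            grads = torch.autograd.grad(loss, params, create_graph=True)
        # Reuse the cached gradient for every step inside the window.
        params = [p - lr * g for p, g in zip(params, grads)]
    return params
```

In the outer loop, the meta-loss on the adapted parameters backpropagates only through the gradients computed at window boundaries, which is where the time and memory savings come from.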

Visual dialog is the task of answering a sequence of questions grounded in an image, utilizing a dialog history. Previous studies have implicitly explored the problem of reasoning over semantic structures among the history using softmax attention. However, we argue that softmax attention yields dense structures that can be distracting for questions requiring only partial or even no contextual information. In this paper, we formulate visual dialog as a graph structure learning task. To tackle the problem, we propose Sparse Graph Learning Networks (SGLNs), consisting of a multimodal node embedding module and a sparse graph learning module. The proposed model explicitly learns sparse dialog structures by incorporating binary and score edges and leveraging a new structural loss function. It then outputs the answer after updating each node via a message passing framework. As a result, the proposed model outperforms state-of-the-art approaches on the VisDial v1.0 dataset while using only 10.95% of the dialog history, and also improves interpretability compared to baseline methods.
In arXiv, 2020
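
As a rough illustration of the sparse-graph idea, the PyTorch sketch below prunes a dense pairwise score matrix over dialog nodes to the top-k edges per node and runs one round of message passing. The top-k pruning merely stands in for SGLN's binary edges and the renormalized weights for its score edges; the actual edge parameterization and structural loss are not reproduced here.

```python
import torch
import torch.nn.functional as F

def sparse_message_passing(nodes, k=3):
    """Illustrative sparsification of a dialog graph (not the SGLN code).

    nodes: (N, D) embeddings for the question and dialog-history rounds.
    """
    scores = nodes @ nodes.t()                      # (N, N) dense pairwise scores
    topk = scores.topk(k, dim=-1).indices           # k strongest incoming edges per node
    mask = torch.full_like(scores, float('-inf'))
    mask.scatter_(-1, topk, 0.0)                    # keep only the selected edges
    attn = F.softmax(scores + mask, dim=-1)         # sparse, renormalized edge weights
    return attn @ nodes                             # aggregate messages from neighbors
```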

In this work, we propose a goal-driven collaborative task that combines language, perception, and action. Specifically, we develop a Collaborative image-Drawing game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects. The game involves two players: a Teller and a Drawer. The Teller sees an abstract scene containing multiple clip art pieces in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip art pieces. The two players communicate with each other using natural language. We collect the CoDraw dataset of ~10K dialogs consisting of ~138K messages exchanged between human players. We define protocols and metrics to evaluate learned agents in this testbed, highlighting the need for a novel crosstalk evaluation condition that pairs agents trained independently on disjoint subsets of the training data. We present models for our task and benchmark them using both fully automated evaluation and live games with human players.
In ACL, 2019
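
The crosstalk evaluation condition can be summarized in a few lines. In the hypothetical sketch below (train_teller and train_drawer are placeholder training routines), the Teller and Drawer that will be paired at test time are trained on disjoint halves of the training data, so they cannot rely on a co-adapted private code learned from the same dialogs.

```python
def crosstalk_pair(train_data, train_teller, train_drawer):
    """Sketch of the crosstalk evaluation condition described for CoDraw."""
    half = len(train_data) // 2
    split_a, split_b = train_data[:half], train_data[half:]
    teller = train_teller(split_a)   # Teller sees only split A
    drawer = train_drawer(split_b)   # Drawer sees only split B
    return teller, drawer            # evaluate this independently trained pair together
```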

We propose a video story question-answering (QA) architecture, Multimodal Dual Attention Memory (MDAM). The key idea is to use a dual attention mechanism with late fusion. MDAM uses self-attention to learn the latent concepts in scene frames and captions. Given a question, MDAM then applies a second attention over these latent concepts. Multimodal fusion is performed after the dual attention processes (late fusion). Using this processing pipeline, MDAM learns to infer a high-level vision-language joint representation from an abstraction of the full video content. We evaluate MDAM on the PororoQA and MovieQA datasets, which have large-scale QA annotations on cartoon videos and movies, respectively. For both datasets, MDAM achieves new state-of-the-art results with significant margins over the runner-up models. Ablation studies confirm that the dual attention mechanism combined with late fusion performs best, and we provide qualitative analysis by visualizing the inference mechanisms of MDAM.
In ECCV, 2018
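
The dual attention pipeline with late fusion can be sketched in PyTorch as follows. This is a simplified stand-in for MDAM, not its released implementation: generic multi-head self-attention plays the role of the first attention over frames and captions, a question-guided attention pools each modality, and the two pooled vectors are fused only at the end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionLateFusion(nn.Module):
    """Minimal sketch of dual attention with late fusion (not the MDAM code)."""
    def __init__(self, dim):
        super().__init__()
        self.frame_sa = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.caption_sa = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def question_attend(self, question, feats):
        # Second attention: weight the latent concepts by the question vector.
        weights = F.softmax(feats @ question.unsqueeze(-1), dim=1)  # (B, T, 1)
        return (weights * feats).sum(dim=1)                         # (B, D)

    def forward(self, frames, captions, question):
        # frames, captions: (B, T, D); question: (B, D)
        f, _ = self.frame_sa(frames, frames, frames)          # latent visual concepts
        c, _ = self.caption_sa(captions, captions, captions)  # latent textual concepts
        v = self.question_attend(question, f)
        t = self.question_attend(question, c)
        return self.fuse(torch.cat([v, t], dim=-1))           # late multimodal fusion
```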

In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions between two groups of input channels, while low-rank bilinear pooling extracts the joint representation for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit the eight attention maps of BAN efficiently. We quantitatively and qualitatively evaluate our model on the visual question answering (VQA 2.0) and Flickr30k Entities datasets. With a simple ensemble of BANs, we placed runner-up in the 2018 VQA Challenge, while BAN was the best single model among the entries.
In NeurIPS, 2018
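
A minimal PyTorch sketch of a low-rank bilinear attention map is given below; it follows the general form of bilinear attention but is not the released BAN code, and the projection and pooling details are simplified. Each pair of visual and textual channels is scored through a Hadamard product in a shared low-rank space, and the resulting attention map pools the same pairwise interactions into a joint representation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankBilinearAttention(nn.Module):
    """Minimal sketch of low-rank bilinear attention (not the released BAN code)."""
    def __init__(self, x_dim, y_dim, rank):
        super().__init__()
        self.U = nn.Linear(x_dim, rank, bias=False)   # projects visual channels
        self.V = nn.Linear(y_dim, rank, bias=False)   # projects textual channels
        self.p = nn.Parameter(torch.randn(rank))      # scores each pairwise interaction

    def forward(self, X, Y):
        # X: (B, n, x_dim) object features; Y: (B, m, y_dim) question word features
        Xp = torch.relu(self.U(X))                                  # (B, n, rank)
        Yp = torch.relu(self.V(Y))                                  # (B, m, rank)
        logits = torch.einsum('bnr,bmr,r->bnm', Xp, Yp, self.p)     # bilinear attention logits
        A = F.softmax(logits.flatten(1), dim=-1).view_as(logits)    # distribution over (i, j) pairs
        joint = torch.einsum('bnm,bnr,bmr->br', A, Xp, Yp)          # attended joint representation
        return joint, A
```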

Recent Publications

Multi-step Estimation for Gradient-based Meta-learning. In arXiv, 2020.

Preprint

DialGraph: Sparse Graph Learning Networks for Visual Dialog. In arXiv, 2020.

Preprint

CoDraw: Collaborative Drawing as a Testbed for Grounded Goal-driven Communication. In ACL, 2019.

Preprint Code

Multimodal Dual Attention Memory for Video Story Question Answering. In ECCV, 2018.

PDF

Bilinear Attention Networks. In NeurIPS, 2018.

Preprint Code Poster Slides Video

Overcoming Catastrophic Forgetting by Incremental Moment Matching. In NIPS (Spotlight), 2017.

Preprint PDF Code Poster

Hadamard Product for Low-rank Bilinear Pooling. In ICLR, 2017.

Preprint PDF Code Slides

Multimodal Residual Learning for Visual QA. In NIPS, 2016.

Preprint PDF Code Poster Video

Recent & Upcoming Talks

Advances in Learning Multimodal Attention Networks
Aug 21, 2020 10:00 AM
Learning Representations of Vision and Language
Oct 28, 2019 9:00 AM
Advances in Attention Networks
Sep 18, 2019 3:30 PM
Advances in Attention Networks
Aug 13, 2019 3:00 PM
Bilinear Attention Networks for VizWiz Grand Challenge 2018
Sep 14, 2018 10:50 AM
Multimodal Deep Learning
Sep 6, 2018 2:50 PM
Multimodal Deep Learning for Visually-Grounded Reasoning
Jun 27, 2018 1:00 PM
Visually-Grounded Question and Answering: from VisualQA to MovieQA
Jun 26, 2018 8:45 AM
Bilinear Attention Networks for Visual Question Answering
Jun 18, 2018 11:35 AM
Multimodal Deep Learning for Visually-Grounded Reasoning
Mar 28, 2018 1:00 PM

Recent Posts

Contact