Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language
Authors
- Mingyu Ding
- Zhenfang Chen
- Tao Du
- Ping Luo
- Joshua Tenenbaum
- Chuang Gan
Published on
10/28/2021
In this work, we propose a unified framework, called Visual Reasoning with Differentiable Physics (VRDP), that can jointly learn visual concepts and infer physics models of objects and their interactions from videos and language. This is achieved by seamlessly integrating three components: a visual perception module, a concept learner, and a differentiable physics engine. The visual perception module parses each video frame into object-centric trajectories and represents them as latent scene representations. The concept learner grounds visual concepts (e.g., color, shape, and material) in these object-centric representations based on the language, thus providing prior knowledge for the physics engine. The differentiable physics model, implemented as an impulse-based differentiable rigid-body simulator, performs differentiable physical simulation based on the grounded concepts to infer physical properties, such as mass, restitution, and velocity, by fitting the simulated trajectories to the video observations. Consequently, the learned concepts and physics models can explain what we have seen and imagine what is about to happen in future and counterfactual scenarios. Integrating differentiable physics into the dynamic reasoning framework offers several appealing benefits. More accurate dynamics prediction in the learned physics models enables state-of-the-art performance on both synthetic and real-world benchmarks while maintaining high transparency and interpretability; most notably, VRDP improves the accuracy on predictive and counterfactual questions by 4.5% and 11.5%, respectively, compared to its best counterpart. VRDP is also highly data-efficient: physical parameters can be optimized from very few videos, and even a single video can be sufficient. Finally, with all physical parameters inferred, VRDP can quickly learn new concepts from a few examples.
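To make the role of the differentiable physics engine concrete, below is a minimal PyTorch sketch of the core idea: recovering physical parameters by gradient descent through an impulse-based simulation, so that simulated trajectories match observed ones. This is a hypothetical toy example, not the authors' implementation; the `simulate` function, the 1D two-ball setup, and all parameter names are illustrative assumptions.

```python
import torch

# Hypothetical sketch (not the authors' code): fit mass and restitution
# by differentiating through an impulse-based simulation, analogous to
# how VRDP fits physical parameters to video trajectories.

def simulate(x0, v0, log_m1, restitution, steps=60, dt=0.05):
    """1D two-ball world: ball 0 has unit mass, ball 1 has mass exp(log_m1).
    On contact, an impulse resolves the collision with the given
    coefficient of restitution."""
    m0, m1 = 1.0, log_m1.exp()              # exp keeps the mass positive
    x, v = x0.clone(), v0.clone()
    positions = []
    for _ in range(steps):
        x = x + v * dt
        # Apply an impulse only when the balls overlap while approaching.
        if (x[1] - x[0] < 0.0) and (v[1] - v[0] < 0.0):
            rel = v[1] - v[0]               # relative approach velocity
            j = -(1 + restitution) * rel / (1.0 / m0 + 1.0 / m1)
            v = v + torch.stack([-j / m0, j / m1])
        positions.append(x)
    return torch.stack(positions)

# "Observed" trajectory; in VRDP this comes from the perception module.
x0 = torch.tensor([0.0, 1.0])
v0 = torch.tensor([1.0, 0.0])
with torch.no_grad():
    target = simulate(x0, v0, torch.tensor(0.6), torch.tensor(0.8))

# Optimize the physical parameters to fit the observation. Gradients
# flow through the impulse equation into restitution and log_m1.
log_m1 = torch.zeros((), requires_grad=True)
restitution = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([log_m1, restitution], lr=0.05)
for step in range(300):
    opt.zero_grad()
    loss = ((simulate(x0, v0, log_m1, restitution) - target) ** 2).mean()
    loss.backward()
    opt.step()

print(f"mass ratio ~ {log_m1.exp().item():.2f}, "
      f"restitution ~ {restitution.item():.2f}")
```

In this toy setting the optimized parameters should approach the ground-truth values (mass ratio 1.82, restitution 0.8); VRDP performs the analogous optimization in 3D, with the grounded visual concepts supplying priors such as object shape.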
This paper was published at NeurIPS 2021.
Please cite our work using the BibTeX below.
@misc{ding2021dynamic,
  title={Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language},
  author={Mingyu Ding and Zhenfang Chen and Tao Du and Ping Luo and Joshua B. Tenenbaum and Chuang Gan},
  year={2021},
  eprint={2110.15358},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}