Introduction

Recent advances in vision-language pre-training (VLP) have demonstrated impressive performance on a range of vision-language (VL) tasks. However, several challenges remain in measuring the community’s progress toward building general multi-modal intelligence. First, most downstream VL datasets are annotated on raw images that are already seen during pre-training, which may lead to an overestimation of current VLP models’ generalization ability. Second, recent VLP work mainly focuses on absolute performance and overlooks the efficiency-performance trade-off, which is also an important indicator of progress. To this end, we introduce the Vision-Language Understanding Evaluation (VLUE) benchmark, a multi-task multi-dimension benchmark for evaluating the generalization capabilities and the efficiency-performance trade-off (“Pareto SOTA”) of VLP models. We demonstrate that there is a sizable generalization gap for all VLP models when testing on out-of-distribution test sets annotated on images from a more diverse distribution that spreads across cultures. Moreover, we find that measuring the efficiency-performance trade-off of VLP models leads to complementary insights for several design choices of VLP. We release the VLUE benchmark to promote research on building vision-language models that generalize well to images unseen during pre-training and are practical in terms of the efficiency-performance trade-off.

Links:   [Paper]   [Leaderboard]   [Data]   [Github]  

VLUE
We are looking for interns/FTEs at ByteDance AI-LAB (in Beijing / Shanghai)! If you are interested in working with us on vision language models, please send your resume to zhangxinsong.0320@bytedance.com

Key Challenges

Most existing downstream VL datasets are annotated on COCO/Visual Genome images that VLP models have already seen during pre-training, so standard test sets tend to overestimate how well these models generalize to truly unseen images. In addition, recent VLP work largely reports absolute performance while overlooking the efficiency-performance trade-off, which is crucial for real-world deployment.

Why is this problem hard?
To address these problems and promote research on truly generalizable and practical VLP, we introduce the Vision-Language Understanding Evaluation (VLUE) benchmark. VLUE is the first multi-task benchmark focusing on vision-language understanding that covers a set of fundamental VL tasks, including image-text retrieval, visual question answering, visual reasoning, and visual grounding, and it maintains a leaderboard tracking the performance of representative and newly proposed VLP methods.

More importantly, VLUE includes a newly annotated private out-of-distribution (OOD) test set for each representative VL task. In contrast to the standard datasets for these tasks, which are annotated on COCO/VG images, our private OOD test sets are annotated on images from the MaRVL (Liu et al., 2021a) dataset, where images are manually collected across cultures by native speakers from different countries. This ensures that the image distribution in our OOD test sets differs from that of COCO/VG images. Moreover, we carefully control the annotation protocol for the OOD test sets to be identical to that of the original in-domain datasets, so the label distribution stays roughly the same while the image distribution differs. This enables us to better measure the true generalization and transferability of VLP models.

In addition, we encourage researchers to measure and compare the efficiency-performance trade-off when reporting new work on VLP. To facilitate this, we measure the efficiency-performance trade-off of representative VLP models in VLUE to track a Pareto SOTA landscape for VLP research. In contrast to conventional benchmarks that capture only a single performance metric, VLUE is a multi-dimension benchmark that takes performance, generalization ability, and efficiency into account. We hope this will promote research on VLP models that are environmentally friendly and practical for real-world applications.

We evaluate a range of representative VLP models on VLUE to facilitate future research and analyze their generalization ability and efficiency-performance trade-off with respect to several key design choices. We find that there is a sizable generalization gap for all VLP models when evaluating on examples annotated on images from an in-the-wild distribution. Also, compared to focusing on a single dimension (i.e., absolute performance), measuring the generalization ability of different models can lead to complementary and even contradictory conclusions. We also find that models with similar performance may occupy completely different positions on the Pareto front measuring the efficiency-performance trade-off, which further demonstrates the need for a multi-dimension benchmark for evaluating VLP models.
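To make these two evaluation dimensions concrete, below is a minimal Python sketch (not the official VLUE evaluation code; the model names, scores, and latencies are hypothetical) of how one might compute each model's in-domain-to-OOD generalization gap and the Pareto front over (inference latency, OOD score) pairs that defines a Pareto SOTA landscape:

# Illustrative sketch only: hypothetical per-model results, not VLUE's released tooling.
from dataclasses import dataclass
from typing import List

@dataclass
class ModelResult:
    name: str
    in_domain_score: float   # e.g., accuracy on the original test set (%)
    ood_score: float         # accuracy on the OOD test set (%)
    latency_ms: float        # average inference latency per example (ms)

def generalization_gap(r: ModelResult) -> float:
    """Drop from in-domain to OOD performance; larger means worse generalization."""
    return r.in_domain_score - r.ood_score

def pareto_front(results: List[ModelResult]) -> List[ModelResult]:
    """Keep models that no other model beats on both OOD score and latency."""
    front = []
    for r in results:
        dominated = any(
            o.ood_score >= r.ood_score and o.latency_ms <= r.latency_ms
            and (o.ood_score > r.ood_score or o.latency_ms < r.latency_ms)
            for o in results
        )
        if not dominated:
            front.append(r)
    return sorted(front, key=lambda r: r.latency_ms)

if __name__ == "__main__":
    # Hypothetical numbers for illustration only.
    results = [
        ModelResult("model_a", in_domain_score=78.3, ood_score=70.1, latency_ms=45.0),
        ModelResult("model_b", in_domain_score=80.1, ood_score=69.5, latency_ms=120.0),
        ModelResult("model_c", in_domain_score=76.0, ood_score=71.2, latency_ms=60.0),
    ]
    for r in results:
        print(f"{r.name}: generalization gap = {generalization_gap(r):.1f} points")
    print("Pareto front:", [r.name for r in pareto_front(results)])

In this sketch, a model is Pareto-optimal if no other model is at least as accurate on the OOD test set and at least as fast, with a strict improvement on one of the two axes.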

Misc.

Citation

@article{zhou2022vlue,
	author    = {Wangchunshu Zhou and Yan Zeng and Shizhe Diao and Xinsong Zhang},
	title     = {VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models},
	journal   = {CoRR},
	volume    = {abs/2205.15237},
	year      = {2022},
	archivePrefix = {arXiv},
	eprint    = {2205.15237}
}