A Comprehensive Evaluation of
Large Vision-Language Models

Logo of LVLM-eHub

Video Demonstrations

Video demonstrations of specific abilities

More LVLM-eHub Features

Comprehensive Evaluation for LVLMs
Open-source Evaluation Tools

Multimodal Arena

Multimodal Arena features anonymous, randomized pairwise LVLM battles, providing user-level evaluation of LVLMs in an open-world scenario.
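Arena-style pairwise battles are typically aggregated into a leaderboard with an Elo-style rating update. The sketch below is illustrative only: the model names, battle records, and the choice of the standard Elo formula are assumptions, not the Arena's actual scoring code.

```python
# Hypothetical sketch: turning anonymous pairwise battle outcomes into
# per-model ratings with the standard Elo update. Model names and battle
# records below are illustrative placeholders.

def elo_update(r_a, r_b, score_a, k=32):
    """Return updated ratings after one battle.

    score_a is 1.0 if model A wins, 0.0 if it loses, 0.5 for a tie.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

def rank_models(battles, base=1000.0):
    """battles: list of (model_a, model_b, score_a) tuples,
    processed in order; returns models sorted by rating."""
    ratings = {}
    for a, b, score_a in battles:
        r_a = ratings.setdefault(a, base)
        r_b = ratings.setdefault(b, base)
        ratings[a], ratings[b] = elo_update(r_a, r_b, score_a)
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)

# Example: model_x beats model_y, ties are scored 0.5.
battles = [("model_x", "model_y", 1.0),
           ("model_y", "model_z", 0.5),
           ("model_x", "model_z", 1.0)]
leaderboard = rank_models(battles)
```

Because each update is zero-sum, the total rating mass stays constant, so ratings remain comparable as more battles accumulate.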

Quantitative Evaluation

Quantitative Evaluation extensively assesses six categories of multimodal capabilities across more than 40 benchmarks.

Open-source Project

LVLM-eHub provides a foundational framework for assessing LVLMs. The project is publicly available on GitHub.

One-click Evaluation

LVLM-eHub provides user-friendly evaluation tools; the question generator and one-click evaluation will be available soon.