ChartCap: Mitigating Hallucination of Dense Chart Captioning

Seoul National University
ICCV 2025 Highlight Poster
Comparison of the original caption and our ChartCap caption.
The chart is sourced from Burghardt & Hartmann (2007), collected by Li et al. (2024), and included in ChartCap.

Abstract

Generating accurate, informative, and hallucination-free captions for charts remains challenging for vision language models, primarily due to the lack of large-scale, high-quality datasets of real-world charts. Existing real-world chart datasets include extraneous information that cannot be inferred from the chart and fail to sufficiently capture structural elements and key insights.

Therefore, we introduce ChartCap, a large-scale dataset of 565K real-world chart images paired with type-specific, dense captions that exclude extraneous information and highlight both structural elements and key insights in detail. To build ChartCap, we design a four-stage pipeline that generates captions using only the data discernible from the chart, and we employ a cycle-consistency-based human verification process that accelerates quality control without sacrificing accuracy.

Additionally, we propose a novel metric, the Visual Consistency Score, which evaluates caption quality by measuring the similarity between a chart regenerated from the caption and the original chart, without relying on reference captions.

Extensive experiments confirm that models fine-tuned on ChartCap consistently generate more accurate and informative captions with reduced hallucinations, surpassing both open-source and proprietary models and even human-annotated captions.

The ChartCap Dataset

Four-stage pipeline of ChartCap.

Building a large-scale chart dataset with high-quality captions requires both a clear schema and an automated yet reliable pipeline. ChartCap introduces a type-specific caption schema across nine chart types, defining structural descriptions and key insights guided by prior work in data visualization and visualization literacy. Our four-stage pipeline filters out non-charts, classifies the chart type and extracts the title, extracts structural and semantic information, and finally generates sentence-level captions. To guarantee quality, we employ a cycle-consistency-based human verification process, reconstructing charts from captions so that annotators can efficiently validate correctness and informativeness.

Cycle-consistency-based human verification: the original chart (left) compared with a chart (right) reconstructed from caption-generated code.
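
For concreteness, the pipeline's control flow can be sketched as below. This is a hedged outline only: the function names, the ChartInfo container, and the stage boundaries are illustrative assumptions on our part, and each stub stands in for the model calls described in the paper rather than the released implementation.

# Sketch of the four-stage ChartCap pipeline (illustrative stubs only).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChartInfo:
    chart_type: str                                # one of the nine supported chart types
    title: Optional[str]                           # title, if discernible from the image
    structure: dict = field(default_factory=dict)  # axes, legends, series labels
    insights: list = field(default_factory=list)   # key trends, extrema, comparisons

def is_chart(image) -> bool:
    """Stage 1: filter out images that are not charts."""
    raise NotImplementedError  # stub for the filtering model

def classify(image) -> tuple:
    """Stage 2: predict the chart type and extract the title."""
    raise NotImplementedError  # stub for the classification model

def extract(image, chart_type: str) -> ChartInfo:
    """Stage 3: extract only information discernible from the chart itself."""
    raise NotImplementedError  # stub for the extraction model

def generate_caption(info: ChartInfo) -> str:
    """Stage 4: fill the type-specific schema with sentence-level captions."""
    raise NotImplementedError  # stub for the caption generator

def run_pipeline(image) -> Optional[str]:
    if not is_chart(image):              # Stage 1: drop non-charts
        return None
    chart_type, title = classify(image)  # Stage 2
    info = extract(image, chart_type)    # Stage 3
    info.title = title
    return generate_caption(info)        # Stage 4

Restricting Stage 3 to information discernible from the pixels is what keeps extraneous, non-inferable details out of the final captions.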

The Visual Consistency Score

To evaluate caption quality automatically, we introduce the Visual Consistency Score (VCS). Each caption is translated into Matplotlib code to regenerate a chart, which is then compared with the original using a vision encoder. In parallel, OCRScore measures how faithfully textual elements are preserved via OCR-based precision and recall. Together, VCS and OCRScore achieve the highest agreement with human judgment among the automatic metrics we compare, offering scalable and reliable chart caption evaluation.
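
To make the two metrics concrete, here is a minimal sketch of the comparison step, assuming the caption has already been translated into Matplotlib code (e.g., by an LLM) and rendered to an image. The choice of CLIP ViT-B/32 via Hugging Face transformers as the vision encoder, and of pytesseract with token-set matching for OCRScore, are illustrative assumptions on our part; the paper's exact encoder, rendering, and matching details may differ.

import pytesseract
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative encoder choice; the paper's encoder may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(image: Image.Image) -> torch.Tensor:
    """Unit-norm vision-encoder embedding of a chart image."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def visual_consistency_score(original: Image.Image, regenerated: Image.Image) -> float:
    """Cosine similarity between the original chart and the chart
    regenerated from the caption's Matplotlib code."""
    return float(embed(original) @ embed(regenerated).T)

def ocr_score(original: Image.Image, regenerated: Image.Image) -> float:
    """F1 over OCR tokens: how well titles, axis labels, and tick text
    survive the caption -> chart round trip."""
    ref = set(pytesseract.image_to_string(original).split())
    hyp = set(pytesseract.image_to_string(regenerated).split())
    if not ref or not hyp:
        return 0.0
    precision = len(ref & hyp) / len(hyp)
    recall = len(ref & hyp) / len(ref)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

Because both scores are computed against the original image rather than a reference caption, they remain applicable to charts for which no ground-truth caption exists.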

Experiments

Results on the ChartCap test set.
Zero-shot evaluation on VisText.
Zero-shot evaluation on Chart-to-Text.

Models fine-tuned on ChartCap achieve state-of-the-art performance on the ChartCap test set and in zero-shot evaluation on the VisText and Chart-to-Text benchmarks.

Qualitative Examples

Qualitative examples comparing (a) the ground-truth chart image from VisText with charts reconstructed from the captions of (b) the human-authored ground truth, (c) Phi3.5-Vision-4B fine-tuned on ChartCap, and (d) Claude 3.5 Sonnet.

BibTeX

@inproceedings{lim2025chartcap,
  title     = {ChartCap: Mitigating Hallucination of Dense Chart Captioning},
  author    = {Junyoung Lim and Jaewoo Ahn and Gunhee Kim},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025},
}