---
license: cc-by-nc-sa-4.0
pretty_name: INTERCHART
tags:
- charts
- visualization
- vqa
- multimodal
- question-answering
- reasoning
- benchmarking
- evaluation
task_categories:
- question-answering
- visual-question-answering
task_ids:
- visual-question-answering
language:
- en
dataset_info:
  features:
    - name: id
      dtype: string
    - name: subset
      dtype: string
    - name: context_format
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: images
      sequence: string
    - name: metadata
      dtype: json
pretty_description: >
  INTERCHART is a diagnostic benchmark for multi-chart visual reasoning across
  three tiers: DECAF (decomposed single-entity charts), SPECTRA (synthetic
  paired charts for correlated trends), and STORM (real-world chart pairs). The
  dataset includes chart images and question–answer pairs designed to
  stress-test cross-chart reasoning, trend correlation, and abstract numerical
  inference.
---
# INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information
## 🧩 Overview
INTERCHART is a multi-tier benchmark that evaluates how well vision-language models (VLMs) reason across multiple related charts, a crucial skill for real-world applications like scientific reports, financial analyses, and policy dashboards.
Unlike single-chart benchmarks, INTERCHART challenges models to integrate information across decomposed, synthetic, and real-world chart contexts.
Paper: INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information
## 📂 Dataset Structure
```
INTERCHART/
├── DECAF
│   ├── combined     # Multi-chart combined images (stitched)
│   ├── original     # Original compound charts
│   ├── questions    # QA pairs for decomposed single-variable charts
│   └── simple       # Simplified decomposed charts
├── SPECTRA
│   ├── combined     # Synthetic chart pairs (shared axes)
│   ├── questions    # QA pairs for correlated and independent reasoning
│   └── simple       # Individual charts rendered from synthetic tables
└── STORM
    ├── combined     # Real-world chart pairs (stitched)
    ├── images       # Original Our World in Data charts
    ├── meta-data    # Extracted metadata and semantic pairings
    ├── questions    # QA pairs for temporal, cross-domain reasoning
    └── tables       # Structured table representations (optional)
```
Each subset targets a different level of reasoning complexity and visual diversity.
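Each `questions` folder pairs QA annotations with the chart images in its sibling folders. As a rough orientation, the feature schema declared in the card metadata above implies records shaped like the sketch below; all values are placeholders for illustration, not actual dataset entries, and the exact on-disk format may differ.

```python
# Hypothetical record shape following the feature schema above.
# Values are placeholders, not real dataset content.
example_record = {
    "id": "spectra_0001",                     # unique question identifier
    "subset": "SPECTRA",                      # DECAF, SPECTRA, or STORM
    "context_format": "combined",             # e.g. combined vs. interleaved context
    "question": "Which series grows faster after the shared event year?",
    "answer": "Series B",
    "images": ["SPECTRA/combined/0001.png"],  # relative paths into the repo tree
    "metadata": {"reasoning_type": "trend correlation"},
}
```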
## 🧠 Subset Descriptions
### 1️⃣ DECAF — Decomposed Elementary Charts with Answerable Facts
- Focus: Factual lookup and comparative reasoning on simplified single-variable charts.
- Sources: Derived from ChartQA, ChartLlama, ChartInfo, DVQA.
- Content: 1,188 decomposed charts and 2,809 QA pairs.
- Tasks: Identify, compare, or extract values across clean, minimal visuals.
### 2️⃣ SPECTRA — Synthetic Plots for Event-based Correlated Trend Reasoning and Analysis
- Focus: Trend correlation and scenario-based inference between synthetic chart pairs.
- Construction: Generated via Gemini 1.5 Pro + human validation to preserve shared axes and realism.
- Content: 870 unique charts, 1,717 QA pairs across 333 contexts.
- Tasks: Analyze multi-variable relationships, infer trends, and reason about co-evolving variables.
### 3️⃣ STORM — Sequential Temporal Reasoning Over Real-world Multi-domain Charts
- Focus: Multi-step reasoning, temporal analysis, and semantic alignment across real-world charts.
- Source: Curated from Our World in Data with metadata-driven semantic pairing.
- Content: 648 charts across 324 validated contexts, 768 QA pairs.
- Tasks: Align mismatched domains, estimate ranges, and reason about evolving trends.
## ⚙️ Evaluation & Methodology
INTERCHART supports both visual and table-based evaluation modes.
Visual Inputs:
- Combined: Charts stitched into a unified image.
- Interleaved: Charts provided sequentially.
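To make the two visual formats concrete, here is a minimal, illustrative sketch of how several per-chart images could be stitched into a single "combined" input; the dataset already ships pre-stitched images, so this is only for orientation, and the "interleaved" setting instead passes the individual images to the model one after another.

```python
from PIL import Image

def stitch_horizontally(image_paths, background="white"):
    """Illustrative horizontal stitch of several chart images into one canvas."""
    charts = [Image.open(p).convert("RGB") for p in image_paths]
    width = sum(c.width for c in charts)
    height = max(c.height for c in charts)
    canvas = Image.new("RGB", (width, height), background)
    x_offset = 0
    for chart in charts:
        canvas.paste(chart, (x_offset, 0))
        x_offset += chart.width
    return canvas

# combined = stitch_horizontally(["chart_a.png", "chart_b.png"])
# combined.save("combined.png")
```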
Structured Table Inputs:
Models can first extract tables using tools like DePlot or Gemini Title Extraction, then run table-based QA over the extracted tables (an illustrative DePlot sketch appears after the prompting strategies below).
Prompting Strategies:
- Zero-Shot
- Zero-Shot Chain-of-Thought (CoT)
- Few-Shot CoT with Directives (CoTD)
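As one way to realize the table-extraction step mentioned above, the snippet below runs the publicly released DePlot checkpoint (`google/deplot`) through the `transformers` library to convert a chart image into a linearized table. This is a generic usage sketch, not the exact extraction pipeline used in the paper; the file name is a placeholder.

```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/deplot")
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")

chart = Image.open("chart_a.png")  # any chart image from the dataset
inputs = processor(
    images=chart,
    text="Generate underlying data table of the figure below:",
    return_tensors="pt",
)
table_ids = model.generate(**inputs, max_new_tokens=512)
linearized_table = processor.decode(table_ids[0], skip_special_tokens=True)
print(linearized_table)  # rows typically separated by <0x0A>, cells by " | "
```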
Evaluation Pipeline:
Answers are scored for semantic correctness by multiple LLM judges (Gemini 1.5 Flash, Phi-4, Qwen2.5), with majority voting deciding the final verdict.
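The aggregation step can be pictured as a simple majority vote over per-judge verdicts; the function below is an illustrative sketch of that idea (judge names and verdict labels are placeholders), not the authors' exact judging code.

```python
from collections import Counter

def majority_vote(verdicts):
    """Return the verdict agreed on by most judges.

    `verdicts` maps judge name -> "correct" / "incorrect" (placeholder labels).
    """
    counts = Counter(verdicts.values())
    label, _ = counts.most_common(1)[0]
    return label

# Example with three hypothetical judge outputs:
print(majority_vote({
    "gemini-1.5-flash": "correct",
    "phi-4": "correct",
    "qwen2.5": "incorrect",
}))  # -> "correct"
```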
## 📊 Dataset Statistics
| Subset | Charts | Contexts | QA Pairs | Reasoning Type Examples |
|---|---|---|---|---|
| DECAF | 1,188 | 355 | 2,809 | Factual lookup, comparison |
| SPECTRA | 870 | 333 | 1,717 | Trend correlation, event reasoning |
| STORM | 648 | 324 | 768 | Temporal reasoning, abstract numerical inference |
| Total | 2,706 | 1,012 | 5,214 | — |
## 🚀 Usage
### 🔐 Access & Download Instructions
Use an access token as your Git credential when cloning or pushing to the repository.
- Install Git LFS
  Download and install from https://git-lfs.com, then run:
  ```bash
  git lfs install
  ```
- Clone the dataset repository
  When prompted for a password, use your Hugging Face access token with write permissions.
  You can generate one here: https://huggingface.co/settings/tokens
  ```bash
  git clone https://huggingface.co/datasets/interchart/Interchart
  ```
- Clone without large files (LFS pointers only)
  If you only want a lightweight clone without downloading all the image data:
  ```bash
  GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/interchart/Interchart
  ```
- Alternative: use the Hugging Face CLI
  Make sure the CLI is installed:
  ```bash
  pip install -U "huggingface_hub[cli]"
  ```
  Then download directly:
  ```bash
  hf download interchart/Interchart --repo-type=dataset
  ```
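For programmatic downloads, the `huggingface_hub` Python API provides `snapshot_download`; the sketch below assumes the repository id shown in the clone URL above and fetches a full local copy.

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository (chart images are LFS files, so this can be large).
local_dir = snapshot_download(
    repo_id="interchart/Interchart",
    repo_type="dataset",
    # allow_patterns=["*/questions/*"],  # illustrative pattern to fetch only QA files
)
print("Dataset downloaded to:", local_dir)
```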
## 🔍 Citation
If you use this dataset, please cite:
```bibtex
@article{iyengar2025interchart,
  title   = {INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information},
  author  = {Anirudh Iyengar Kaniyar Narayana Iyengar and Srija Mukhopadhyay and Adnan Qidwai and Shubhankar Singh and Dan Roth and Vivek Gupta},
  journal = {arXiv preprint arXiv:2508.07630},
  year    = {2025}
}
```
## 🔗 Links
- 📘 Paper: arXiv:2508.07630v1
- 🌐 Website: https://coral-lab-asu.github.io/interchart/
- 🧠 Explore Dataset: Interactive Evaluation Portal