- [2025.12.04] ChartMimic has been adopted by Qwen3-VL as one of the benchmarks for Multi-Modal Coding. Please check the paper for more details.
- [2025.06.18] 🔧 ChartMimic has been integrated into VLMEvalKit. Welcome to use ChartMimic through VLMEvalKit! Special thanks to the VLMEvalKit team.
- [2025.02.01] 🥳 ChartMimic is accepted by ICLR 2025.
- [2024.06.13] 📣 ChartMimic is released.
ChartMimic aims to assess the visually grounded code generation capabilities of large multimodal models (LMMs). ChartMimic uses information-intensive visual charts and textual instructions as inputs, requiring LMMs to generate the corresponding code for chart rendering.
ChartMimic includes 4,800 human-curated (figure, instruction, code) triplets, representing authentic chart use cases found in scientific papers across various domains (e.g., Physics, Computer Science, Economics). These charts span 18 regular types and 4 advanced types, diversifying into 201 subcategories. Furthermore, we propose multi-level evaluation metrics to provide an automatic and thorough assessment of the output code and the rendered charts. Unlike existing code generation benchmarks, ChartMimic emphasizes evaluating LMMs' capacity to integrate multiple cognitive capabilities: visual understanding, code generation, and cross-modal reasoning.
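To give a flavor of what a low-level metric measures, the sketch below computes an F1 score over the sets of text elements in the ground-truth and generated charts. This is a conceptual illustration only; the repository's evaluation scripts (Steps 3 and 4 in the Quick Start below) implement the actual scoring.

```python
# Conceptual sketch of a low-level score: F1 over chart text elements.
# Illustrative only; the repository's evaluators implement the real metrics.
def text_f1(truth: set[str], pred: set[str]) -> float:
    if not truth or not pred:
        return 0.0
    tp = len(truth & pred)  # text elements matched exactly
    precision, recall = tp / len(pred), tp / len(truth)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(text_f1({"Q1", "Q2", "Revenue"}, {"Q1", "Q2", "Profit"}))  # ~0.667
```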
Here we provide a quick start guide to evaluate LMMs on ChartMimic.
Kind Note: ChartMimic has been integrated into VLMEvalKit. Welcome to use ChartMimic through VLMEvalKit!
conda env create -f environment.yaml
conda activate chartmimic

Set up the environment variables in the `.env` file:
PROJECT_PATH=${YOUR_PROJECT_PATH}
OPENAI_BASE_URL=${YOUR_OPEN_AI_BASE_URL}
OPENAI_API_KEY=${YOUR_OPENAI_API_KEY}
ANTHROPIC_API_KEY=${YOUR_ANTHROPIC_API_KEY}
GOOGLE_API_KEY=${YOUR_GOOGLE_API_KEY}
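To confirm the variables are picked up, you can run a quick standalone check. This sketch assumes the `python-dotenv` package and is independent of how the project itself reads the file:

```python
# Sanity check that the .env file is readable (assumes python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
for key in ("PROJECT_PATH", "OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"):
    print(key, "set" if os.getenv(key) else "MISSING")
```

You can download the whole evaluation data by running the following command: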
cd ChartMimic # cd to the root directory of this repository
mkdir dataset
wget https://huggingface.co/datasets/ChartMimic/ChartMimic/resolve/main/dataset-iclr.tar.gz
tar -xzvf dataset-iclr.tar.gz -C dataset

Example script for gpt-4-vision-preview on the Direct Mimic task:
export PROJECT_PATH=${YOUR_PROJECT_PATH}
# Step 1: Get Model Response
bash scripts/direct_mimic/run_generation.sh
# Step 2: Run the Code in the Response
bash scripts/direct_mimic/run_code.sh
# Step 3: Get Low-level Score
bash scripts/direct_mimic/run_evaluation_lowlevel.sh
# Step 4: Get High-level Score
bash scripts/direct_mimic/run_evaluation_highlevel.sh
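Step 2 executes the code in the model's response to render its chart. Conceptually, that step extracts the fenced Python block from the response and runs it in a separate process; the sketch below is illustrative only and is not the actual logic of `run_code.sh`:

```python
# Illustrative only: extract and run the Python block from a model response.
import re
import subprocess
import tempfile

def run_generated_code(response: str) -> None:
    match = re.search(r"`{3}python\n(.*?)`{3}", response, re.DOTALL)
    if match is None:
        raise ValueError("no Python code block found in the response")
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(match.group(1))
    # A subprocess keeps crashes in generated code from killing the harness.
    subprocess.run(["python", f.name], check=False, timeout=60)
```

Example script for gpt-4-vision-preview on the Customized Mimic task: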
export PROJECT_PATH=${YOUR_PROJECT_PATH}
# Step 1: Get Model Response
bash scripts/customized_mimic/run_generation.sh
# Step 2: Run the Code in the Response
bash scripts/customized_mimic/run_code.sh
# Step 3: Get Low-level Score
bash scripts/customized_mimic/run_evaluation_lowlevel.sh
# Step 4: Get High-level Score
bash scripts/customized_mimic/run_evaluation_highlevel.sh

We now offer configurations for 14 SOTA LMMs (gpt-4-vision-preview, claude-3-opus-20240229, gemini-pro-vision, Phi-3-vision-128k-instruct, MiniCPM-Llama3-V-2_5, InternVL-Chat-V1-5, cogvlm2-llama3-chat-19B, deepseekvl, llava-v1.6-mistral-7b-hf, llava-v1.6-34b-hf, idefics2-8b, llava-v1.6-vicuna-13b-hf, llava-v1.6-vicuna-7b-hf, and qwenvl).
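If you want to query a model outside the provided scripts, a Direct Mimic request with the OpenAI Python SDK might look like the sketch below. The model name, image path, and prompt wording are placeholders; the official prompts live in the repository's scripts:

```python
# Minimal sketch of a Direct Mimic query via the OpenAI SDK (not the official
# pipeline). Assumes OPENAI_API_KEY is set and the chart image exists locally.
import base64
from openai import OpenAI

client = OpenAI()  # also honors OPENAI_BASE_URL from the environment
with open("dataset/direct_1800/example.png", "rb") as f:  # hypothetical path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute the model you evaluate
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Generate matplotlib code that reproduces this chart."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```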
You can download the whole evaluation data by running the following command:
cd ChartMimic # cd to the root directory of this repository
mkdir dataset
wget https://huggingface.co/datasets/ChartMimic/ChartMimic/resolve/main/dataset-iclr.tar.gz
tar -xzvf dataset-iclr.tar.gz -C dataset

To help researchers quickly understand the evaluation data, we provide a Dataset Viewer on Hugging Face Datasets: 🤗 ChartMimic.
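Alternatively, you can fetch the same tarball programmatically with the `huggingface_hub` client:

```python
# Download the evaluation tarball via the Hugging Face Hub client.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="ChartMimic/ChartMimic",
    filename="dataset-iclr.tar.gz",
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded archive
```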
The file structure of the evaluation data is as follows:
.
├── customized_1800/
├── customized_600/
├── direct_1800/
└── direct_600/
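To iterate over the ground-truth examples, you can pair each code file with its rendered figure. The paired `<id>.py` / `<id>.png` naming below is an assumption about the directory layout, so check the extracted folders for the actual structure:

```python
# Pair ground-truth code files with their rendered figures.
# The <id>.py / <id>.png pairing is an assumed layout, not a documented one.
from pathlib import Path

data_dir = Path("dataset/direct_1800")
for code_file in sorted(data_dir.glob("*.py")):
    figure = code_file.with_suffix(".png")
    if figure.exists():
        print(code_file.name, "<->", figure.name)
```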
A legacy version of the evaluation data is also available, with `dimentions_info.jsonl` and `dimentions_info_edit.jsonl` under the `legacy` folder. You can download it by running the following command:
cd ChartMimic # cd to the root directory of this repository
mkdir dataset
wget https://huggingface.co/datasets/ChartMimic/ChartMimic/resolve/main/dataset-old.tar.gz
tar -xzvf dataset-old.tar.gz -C dataset

The file structure of the legacy evaluation data is as follows:
.
├── customized_500/ # Data for Customized Mimic
├── ori_500/        # Data for Direct Mimic
└── test.jsonl      # Data for both tasks
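`test.jsonl` is standard JSON Lines; since its exact fields are not documented here, a quick way to inspect them is to print the keys of the first record:

```python
# Print the field names of the first record in test.jsonl.
import json

with open("dataset/test.jsonl", encoding="utf-8") as f:
    first = json.loads(next(f))
print(sorted(first.keys()))
```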
If you find this repository useful, please consider giving it a star and citing our paper:
@article{yang2024chartmimic,
title={ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation},
author={Yang, Cheng and Shi, Chufan and Liu, Yaxin and Shui, Bo and Wang, Junjie and Jing, Mohan and Xu, Linran and Zhu, Xinyu and Li, Siheng and Zhang, Yuxiang and others},
journal={arXiv preprint arXiv:2406.09961},
year={2024}
}
The ChartMimic data and codebase are licensed under the Apache-2.0 License.
We would like to express our gratitude to agentboard for their project codebase.

