ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding
Xingyu Fu1
Minqian Liu2
Zhengyuan Yang3
John Corring3
Yijuan Lu3
Jianwei Yang3
Dan Roth1
Dinei Florencio3
Cha Zhang3
1University of Pennsylvania
2Virginia Tech
3Microsoft
ICML 2025
Abstract
Structured image understanding, such as interpreting tables and charts, requires strategically refocusing across various structures and texts within an image, forming a reasoning sequence to arrive at the final answer. However, current multimodal large language models (LLMs) lack this multi-hop selective attention capability. In this work, we introduce ReFocus, a simple yet effective framework that equips multimodal LLMs with the ability to generate "visual thoughts" by performing visual editing on the input image through code, shifting and refining their visual focus. Specifically, ReFocus enables multimodal LLMs to generate Python code that calls tools to modify the input image, sequentially drawing boxes, highlighting sections, and masking out areas, thereby enhancing the visual reasoning process. We experiment on a wide range of structured image understanding tasks involving tables and charts. ReFocus substantially improves performance on all tasks over GPT-4o without visual editing, yielding an average gain of 11.0% on table tasks and 6.8% on chart tasks. We present an in-depth analysis of the effects of different visual edits and the reasons why ReFocus improves performance without introducing additional information. Furthermore, we collect a 14k training set using ReFocus and show that such a visual chain of thought with intermediate information provides better supervision than standard VQA pairs, achieving consistent gains over the same model trained with QA data.
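To make the mechanism concrete, below is a minimal sketch of the kind of image-editing tools that such generated code could call, written with PIL. The function names (draw_box, highlight_region, mask_region), their signatures, the color choices, and the file paths are illustrative assumptions, not the paper's actual tool API.

```python
# Minimal sketch of ReFocus-style visual-editing tools (illustrative only).
# Function names, signatures, colors, and paths are assumptions, not the paper's API.
from PIL import Image, ImageDraw


def draw_box(image: Image.Image, box: tuple[int, int, int, int],
             color: str = "red", width: int = 3) -> Image.Image:
    """Draw a rectangle around a region of interest (e.g., a table column)."""
    out = image.copy()
    ImageDraw.Draw(out).rectangle(box, outline=color, width=width)
    return out


def highlight_region(image: Image.Image, box: tuple[int, int, int, int],
                     color: tuple[int, int, int] = (255, 255, 0),
                     alpha: int = 80) -> Image.Image:
    """Overlay a translucent color on a region to draw attention to it."""
    base = image.convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    ImageDraw.Draw(overlay).rectangle(box, fill=color + (alpha,))
    return Image.alpha_composite(base, overlay).convert("RGB")


def mask_region(image: Image.Image, box: tuple[int, int, int, int],
                fill: str = "white") -> Image.Image:
    """Paint over an irrelevant region so the model stops attending to it."""
    out = image.copy()
    ImageDraw.Draw(out).rectangle(box, fill=fill)
    return out


# Example usage: refocus on one table column by masking distractors and boxing
# the target. The path and coordinates are placeholders.
img = Image.open("table.png")
edited = mask_region(img, box=(0, 0, 120, img.height))    # hide left columns
edited = draw_box(edited, box=(120, 0, 300, img.height))  # box the target column
edited.save("table_refocused.png")
```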
More Examples
Note that the left-side images are the original inputs, and the right-side images are edited by ReFocus.
Quantitative Results
ReFocus substantially improves performance on almost all tasks over GPT-4-turbo and GPT-4o without refocusing, setting a new state of the art on chart and table tasks.
Finetune with ReFocus
1. We collect a 14k training dataset using ReFocus on ChartQA and release the data (an illustrative example format is sketched after this list).
2. We finetune Phi-3.5-vision with the collected dataset and release the model.
3. Our finetuned model sets a new standard, outperforming the same base model finetuned with standard QA data or with textual chain-of-thought (CoT) data.
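For reference, here is a hypothetical sketch of what a single ReFocus training example might contain: the question, the intermediate visual-editing step expressed as code, and the final answer. The field names and values are assumptions for illustration only; consult the released dataset for the actual schema.

```python
# Hypothetical ReFocus-style training example (field names and values are
# illustrative assumptions; the released dataset's schema may differ).
example = {
    "image": "charts/example_chart.png",  # input chart image (placeholder path)
    "question": "Which year had the highest revenue?",
    "visual_thought": (  # intermediate editing step, recorded as code
        "# Hide the legend and focus on the revenue bars\n"
        "image = mask_region(image, box=(520, 0, 640, 80))\n"
        "image = highlight_region(image, box=(60, 40, 500, 300))\n"
    ),
    "answer": "2019",  # placeholder answer
}
```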
Finetuned Model Output Examples
Related Work
- Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
- Visual Programming for Compositional Visual Reasoning
- ViperGPT: Visual Inference via Python Execution for Reasoning
- ReAct: Synergizing Reasoning and Acting in Language Models
- Whiteboard-of-Thought: Thinking Step-by-Step Across Modalities
- TableVQA-Bench: A Visual Question Answering Benchmark on Multiple Table Domains
- ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning
- CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
- Phi-3.5-vision Model
- Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning
- Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models
BibTeX
@article{fu2025refocus,
  title={ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding},
  author={Xingyu Fu and Minqian Liu and Zhengyuan Yang and John Corring and Yijuan Lu and Jianwei Yang and Dan Roth and Dinei Florencio and Cha Zhang},
  journal={arXiv preprint arXiv:2501.05452},
  year={2025}
}