Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
Yushi Hu1,2, Weijia Shi1, Xingyu Fu3, Dan Roth3, Mari Ostendorf1, Luke Zettlemoyer1, Noah A. Smith*1,2, Ranjay Krishna*1,2
1University of Washington, 2Allen Institute for AI, 3University of Pennsylvania
How does Sketchpad work?
Sketchpad equips multimodal LMs such as GPT-4 with the ability to generate intermediate sketches while reasoning over a task. Given a visual input and a query, such as proving that the angles of a triangle sum to 180°, Sketchpad lets the model draw auxiliary lines that help solve the geometry problem. For computer vision problems, Sketchpad can call vision specialists to sketch and facilitate visual reasoning: for example, drawing bounding boxes with Grounding DINO, or sketching masks with Segment Anything.
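Concretely, sketches are produced as executable code in an agent-style loop: the model thinks, optionally writes drawing code, observes the rendered image, and repeats until it commits to an answer. Below is a minimal sketch of one plausible shape of that loop, not the released implementation; `call_multimodal_lm`, `extract_python_block`, and `run_python` are hypothetical helpers standing in for an LM API wrapper, a code-block parser, and a sandboxed executor.

```python
def sketchpad_loop(image, query, max_turns=5):
    """Alternate between LM reasoning and sketch execution until the LM
    emits a final answer instead of more drawing code."""
    context = [
        {"role": "system",
         "content": ("You may write Python (matplotlib, OpenCV, vision "
                     "specialists) inside ```python``` blocks to sketch on "
                     "the image. End with ANSWER: <answer> when done.")},
        {"role": "user", "content": [query, image]},
    ]
    for _ in range(max_turns):
        reply = call_multimodal_lm(context)      # thought + optional sketch code
        context.append({"role": "assistant", "content": reply})
        if "ANSWER:" in reply:                   # the LM decided to stop sketching
            return reply.split("ANSWER:")[-1].strip()
        code = extract_python_block(reply)       # pull out the drawing code
        sketch = run_python(code, inputs={"image": image})  # render the sketch
        # Feed the visual artifact back so the next turn reasons over it.
        context.append({"role": "user",
                        "content": ["Execution result:", sketch]})
    return None  # no answer within the turn budget
```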
Abstract
Humans draw to facilitate reasoning: we draw auxiliary lines when solving geometry problems; we mark and circle when reasoning on maps; we use sketches to amplify our ideas and relieve our limited-capacity working memory. However, such actions are missing in current multimodal language models (LMs). Current chain-of-thought and tool-use paradigms only use text as intermediate reasoning steps. In this work, we introduce Sketchpad, a framework that gives multimodal LMs a visual sketchpad and tools to draw on the sketchpad. The LM conducts planning and reasoning according to the visual artifacts it has drawn. Different from prior work, which uses text-to-image models to enable LMs to draw, Sketchpad enables LMs to draw with lines, boxes, marks, etc., which is closer to human sketching and better facilitates reasoning. Sketchpad can also use specialist vision models during the sketching process (e.g., draw bounding boxes with object detection models, draw masks with segmentation models) to further enhance visual perception and reasoning. We experiment on a wide range of math tasks (including geometry, functions, graphs, and chess) and complex visual reasoning tasks. Sketchpad substantially improves performance on all tasks over strong base models with no sketching, yielding an average gain of 12.7% on math tasks and 8.6% on vision tasks. GPT-4o with Sketchpad sets a new state of the art on all tasks, including V*Bench (80.3%), BLINK spatial reasoning (83.9%), and visual correspondence (80.8%).
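To make the geometry example concrete, the snippet below shows the kind of drawing code the model might emit for the triangle-angle-sum problem: plot the triangle, then add an auxiliary line through the apex parallel to the base. The vertex coordinates and styling are invented for illustration; in Sketchpad the model would derive them from the input figure.

```python
import matplotlib.pyplot as plt

A, B, C = (0, 0), (6, 0), (2, 4)             # triangle vertices (assumed)

fig, ax = plt.subplots()
ax.plot(*zip(A, B, C, A), color="black")     # the original triangle
# Auxiliary line through C parallel to AB: alternate interior angles at C
# now mirror the two base angles, so the three angles at C lie on a
# straight line and the triangle's angles sum to 180 degrees.
ax.plot([C[0] - 3, C[0] + 3], [C[1], C[1]], color="red", linestyle="--")
ax.set_aspect("equal")
ax.axis("off")
plt.savefig("sketch.png")                    # artifact fed back to the LM
```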
More Examples
Effectiveness of Sketchpad
Related Work
- Visual Programming for Compositional Visual Reasoning
- ViperGPT: Visual Inference via Python Execution for Reasoning
- Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models
- ReAct: Synergizing Reasoning and Acting in Language Models
- AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
- BLINK: Multimodal Large Language Models Can See but Not Perceive
- V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs
- Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs
- IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
- Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning
BibTeX
@article{hu2024visual,
  title={Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models},
  author={Hu, Yushi and Shi, Weijia and Fu, Xingyu and Roth, Dan and Ostendorf, Mari and Zettlemoyer, Luke and Smith, Noah A and Krishna, Ranjay},
  journal={arXiv preprint arXiv:2406.09403},
  year={2024}
}