Task2Sim: Towards Effective Pre-training and Transfer from Synthetic Data
Abstract

Pre-training models on Imagenet or other massive datasets of real images has led to major advances in computer vision, albeit accompanied by shortcomings related to curation cost, privacy, usage rights, and ethical issues. In this paper, for the first time, we study the transferability of pre-trained models based on synthetic data generated by graphics simulators to downstream tasks from very different domains. In using such synthetic data for pre-training, we find that downstream performance on different tasks is favored by different configurations of simulation parameters (e.g., lighting, object pose, backgrounds), and that there is no one-size-fits-all solution. It is thus better to tailor synthetic pre-training data to a specific downstream task for best performance. We introduce Task2Sim, a unified model mapping downstream task representations to optimal simulation parameters used to generate synthetic pre-training data for them. Task2Sim learns this mapping by training to find the best set of parameters on a set of "seen" tasks. Once trained, it can then be used to predict the best simulation parameters for novel "unseen" tasks in one shot, without requiring additional training. Given a budget on the number of images per class, our extensive experiments with 20 diverse downstream tasks show that Task2Sim's task-adaptive pre-training data results in significantly better downstream performance than non-adaptively chosen simulation parameters, on both seen and unseen tasks. It is even competitive with pre-training on real images from Imagenet.
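To make the idea concrete, below is a minimal sketch of such a mapping from a downstream-task representation to a simulation configuration. Everything beyond what the abstract states is an assumption made for illustration: the specific control names (`lighting`, `object_pose`, `backgrounds`, `blur`, `materials`), the MLP architecture, the embedding dimension, and the one on/off decision per control are not the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Illustrative simulation controls; the actual set used by Task2Sim may differ
# (the abstract only names lighting, object pose, and backgrounds as examples).
SIM_PARAMS = ["lighting", "object_pose", "backgrounds", "blur", "materials"]


class Task2SimSketch(nn.Module):
    """Sketch: map a downstream-task representation to a per-control on/off
    configuration used to generate synthetic pre-training data."""

    def __init__(self, task_emb_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, len(SIM_PARAMS)),  # one logit per control
        )

    def forward(self, task_embedding: torch.Tensor) -> dict:
        logits = self.net(task_embedding)
        # On a novel ("unseen") task this is the one-shot prediction; on the
        # "seen" tasks the mapping would instead be trained so that the
        # predicted configuration maximizes downstream performance.
        decisions = torch.sigmoid(logits).squeeze(0) > 0.5
        return {name: bool(on) for name, on in zip(SIM_PARAMS, decisions)}


# Example: predict a simulation configuration from a (random) 1024-d task embedding.
model = Task2SimSketch(task_emb_dim=1024)
config = model(torch.randn(1, 1024))
print(config)  # e.g. {'lighting': True, 'object_pose': False, ...}
```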
Simulation Controls
[Figure: Different properties of synthetic data parameterized in simulation]
Paper & Code
Samarth Mishra, Rameswar Panda, Cheng Perng Phoo, Chun-Fu Chen, Leonid Karlinsky, Kate Saenko, Venkatesh Saligrama, Rogerio Feris
Task2Sim: Towards Effective Pre-training and Transfer from Synthetic Data
[Arxiv] [Code] [Poster] [Video]








