2nd Workshop on Efficient Computing under Limited Resources: Visual Computing
Workshop at ICCV 2025

Cutting-edge visual computing techniques, especially those recognized for their large-scale capabilities, have achieved impressive results in computer vision. However, their heavy reliance on vast amounts of data, labels, and computational resources poses challenges when deploying them in real-world scenarios with limited resources. This workshop aims to bring together experts in data-efficient, label-efficient, and computation-efficient visual computing, establishing a collaborative platform to exchange recent breakthroughs and deliberate on the future direction of visual computing models. By facilitating the exchange of ideas and insights, we aspire to address the efficiency challenges inherent in visual computing and contribute to the evolution of its practical applications in the real world.
Overview
This workshop centers on the academic exploration of efficient methodologies in visual computing. Our focus includes data-efficient strategies such as image/video compression, label-efficient strategies such as zero-/few-shot learning, and model-efficient approaches such as model sparsification and quantization. By convening researchers specializing in these areas, we aim to share recent research findings and discuss the future trajectory of efficient visual computing. The timeliness of this theme, which has drawn considerable attention from researchers because of its direct relevance to practical applications, underscores the workshop's importance. Through this platform, we hope to create a space for innovative perspectives on the efficiency challenges facing visual computing and to contribute to progress in real-world applications.
Call for Papers

We invite submissions on any aspect of efficiency in visual computing, including but not limited to:

Data-efficient visual computing:
- Improving image/video compression
- Improving point cloud compression
- New methods for multi-view image and video compression
- Lossless compression and entropy models
- Compression for human and machine vision

Label-efficient visual computing:
- New methods for in-context learning
- New methods for few-/zero-shot learning
- New methods for domain adaptation
- New methods for training models under limited labels
- Benchmarks for evaluating model generalization

Model-efficient visual computing:
- Network sparsity, quantization, and distillation
- Efficient network architecture design
- Hardware implementation and on-device learning
- Brain-inspired computing methods
- Efficient training techniques
Peer review: Paper submissions must conform to the double-blind review policy. All papers will be peer-reviewed by experts in the field and will receive at least two reviews from the program committee. Based on the reviewers' recommendations, accepted papers will be assigned either a contributed talk or a poster presentation.
Submission Site: https://openreview.net/group?id=thecvf.com/ICCV/2025/Workshop/ECLR
Submission Deadline: 07 Jun, 2025
Important: Accepted papers will be published in the workshop proceedings alongside the ICCV main conference and indexed in EI Compendex.
Important Dates
| Event | Date |
|---|---|
| Paper submission deadline | 07 Jun, 2025 |
| Notification of acceptance | 21 Jun, 2025 |
| Camera-ready submission deadline | 28 Jun, 2025 |
| Workshop date | 19 Oct (Afternoon), 2025 |
Workshop Schedule
Location: Room 327
Zoom Link: To be updated before 1:00 PM, 19 Oct.
| Time | Event |
|---|---|
| 14:30-14:40 | Opening Remarks |
| 14:40-15:10 | Invited Talk 1: Dr. Rogerio Schmidt Feris (MIT-IBM Watson AI Lab) |
| 15:10-15:40 | Coffee Break |
| 15:40-15:55 | Best Paper Presentation: Fine-tuning Large Models for Image Segmentation under Limited Resources: The SAM2-UNet Experience (Kuo Wang, Sun Yat-sen University) |
| 15:55-16:05 | Oral Presentation 1: SPoT: Subpixel Placement of Tokens in Vision Transformers (Martine Hjelkrem-Tan, University of Oslo) |
| 16:05-16:15 | Oral Presentation 2: Efficient Depth- and Spatially-Varying Image Simulation for Defocus Deblur (Fengchen He, Huazhong University of Science and Technology) |
| 16:15-16:25 | Oral Presentation 3: Tiny-vGamba: Distilling Large Vision-(Language) Knowledge from CLIP into a Lightweight vGamba Network (Yunusa Haruna, NewraLab Suzhou & Beihang University; online presentation) |
| 16:25-17:00 | Poster Session |
| 17:00-17:10 | Closing Remarks |
Speakers (to be updated)

- Rogerio Schmidt Feris (MIT-IBM Watson AI Lab)

Organizers

- Jinyang Guo (Beihang University)
- Zhenghao Chen (The University of Newcastle)
- Yuqing Ma (Beihang University)
- Yifu Ding (Beihang University & Nanyang Technological University)
- Xianglong Liu (Beihang University)
- Jinman Kim (The University of Sydney)
- Wanli Ouyang (Shanghai AI Laboratory)
- Dacheng Tao (Nanyang Technological University)

Publication Chairs

- Yejun Zeng (Beihang University)
- Jiacheng Wang (Beihang University)

Local Arrangement Chairs

- Yanan Zhu (Beihang University)
Accepted Papers
🎉 Accepted Long Papers
- Subpixel Placement of Tokens in Vision Transformers
- SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation
- Kernel-based Motion Free B-frame Coding for Neural Video Compression
- Efficient Depth-Varying Optical Simulation for Defocus Deblur
- Tiny-vGamba: Distilling Large Vision-(Language) Knowledge from CLIP into a Lightweight vGamba Network
- Low-bit FlashAttention Accelerated Operator Design Based on Triton
- Pruning by Block Benefit: Exploring the Properties of Vision Transformer Blocks during Domain Adaptation
- VCMamba: Bridging Convolutions with Multi-Directional Mamba for Efficient Visual Representation
- Leveraging Learned Image Prior for 3D Gaussian Compression
- Fisheye image augmentation for overcoming domain gaps with the limited dataset
- From Binary to Semantic: Utilizing Large-Scale Binary Occupancy Data for 3D Semantic Occupancy Prediction
- Relevance-Guided Activation Sparsification for Bandwidth-Efficient Collaborative Inference
- Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
- Adaptive Compression of Large Vision Models for Efficient Image Quality Assessment of AI-Generated Content
- FDAL: Leveraging Feature Distillation for Efficient and Task-Aware Active Learning
- From Coarse to Fine: Learnable Discrete Wavelet Transforms for Efficient 3D Gaussian Splatting
- Compressed Diffusion: Pruning with Knowledge Distillation for Efficient Text-to-Image Generation
🎉 Accepted Short Papers
- Latent Representation of Microstructures using Variational Autoencoders with Spatial Statistics Space Loss
- L-GGSC: Learnable Graph-based Gaussian Splatting Compression
- Your Super Resolution Model is not Enough for tackling Real-World Scenarios
- Decay Pruning Method: Smooth Pruning With a Self-Rectifying Procedure
Previous Workshops
1st International Workshop on Efficient Multimedia Computing under Limited Resources @ ACM MM 2024