Prof. Eyal Ofek

Chair of Computer Science

(HCI, Mixed Reality, Computer Vision)

Work e-mail: e.ofek@bham.ac.uk

Personal e-mail: eyal.ofek@gmail.com

LinkedIn, Facebook, YouTube,

Google Scholar, Curriculum vitae (academic),

A complete list of publications

I am affiliated with the University of Birmingham’s Institute for Data and AI (IDAI).

I am affiliated with the University of Birmingham’s Institute of Robotics.

I am a member of the University of Birmingham VRlab.

I am a member of BHamXR.

I am always looking for creative, self-motivated Ph.D. students who are passionate about technology (postdocs, Master’s students, and visiting researchers are also welcome!).

If you are interested in working with me, please email me.

Please apply through the University application system here and include my name in your application.

About me

I hold a Chair of Computer Science at the School of Computer Science, University of Birmingham, UK.

My research leverages AI and the augmentation of users’ senses to overcome the physical limitations of the real world, fostering a more inclusive workplace that bridges differences in location, abilities, resources, and customs and enables better collaboration.

I am a member of the Perception, Language, and Action Theme at the University of Birmingham Virtual Reality Lab. Additionally, I am an affiliate of the University of Birmingham’s Institute for Data and AI (IDAI).

Pioneering Technology

As an interdisciplinary scholar, entrepreneur, and practitioner, I have had the pleasure of introducing several new technologies to the world, including the world’s first time-of-flight colour and depth video camera (ZCam), whose technology was used in Microsoft HoloLens, Microsoft Kinect, Magic Leap headsets, and more. I developed the first street-side service, combining maps with ground-level street coverage, which inspired Google’s Street View and Microsoft’s StreetSide services. I created the Stroke Width Transform, a feature for detecting text in natural images that is used by many OCR and mobile services and is included in the OpenCV library. I developed a novel Augmented Reality system for automatically fitting AR experiences to changing environments (FLARE), whose technology was used in Microsoft HoloLens and Unity MARS. I have also developed a novel robotic cane to guide blind individuals, technology to map and fight wildfires using autonomous drones, and much more (see more here).
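
For a concrete sense of how the Stroke Width Transform is used today, here is a minimal sketch of text detection through OpenCV’s contrib text module. It assumes opencv-contrib-python (4.5 or later), which exposes cv2.text.detectTextSWT; the image path is a placeholder.

```python
# Minimal sketch: Stroke Width Transform text detection via OpenCV contrib.
# Assumes opencv-contrib-python >= 4.5; "street_scene.jpg" is a placeholder.
import cv2

img = cv2.imread("street_scene.jpg")
assert img is not None, "image not found"

# dark_on_light=True searches for dark text on a light background;
# run again with False to also catch light-on-dark text.
rects, draw, chain_bbs = cv2.text.detectTextSWT(img, True)

# Draw one bounding box per detected text region.
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("text_regions.jpg", img)
print(f"found {len(rects)} candidate text regions")
```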

Pioneering Research

I am a senior member of the Association for Computing Machinery.

I am an IEEE Distinguished Contributor.

I have founded several research groups, including the Microsoft Bing Maps and Microsoft Research Augmented Reality groups. I have published over 110 papers in leading forums such as Science Robotics, Nature Communications, Computer Vision & Pattern Recognition, SIGGRAPH, CHI, and more; more than 10 of these papers received a Best Paper award or Best Paper Honourable Mention, and my work has been cited about 19,000 times. I have been granted more than 100 patents in computer vision, graphics, location-based services, and AI. I frequently serve as an Associate Chair (AC) for major conferences and was the paper chair of ACM SIGSPATIAL 2011.

Past Experience

2023 Data Blanket. Led the vision/AI development of real-time, AI-driven firefighting.

Wildfires result in the loss of thousands of lives, millions of acres, hundreds of billions of dollars in damage, and over 5% of global emissions every year. We built a system to empower firefighters with new tactical tools and information.

2011-2023 Principal Researcher & Research Manager, Microsoft Research

My research focused on Human-Computer Interaction (HCI), sensing, and Mixed Reality (MR) displays, enabling users to reach their full potential in productivity, creativity, and collaboration. I was granted more than 110 patents, published over 90 academic papers (with more than 14,000 citations), and was awarded ACM Senior Member status.

In addition to publications and the transfer of technology to products, I have released multiple tools and open-source libraries, such as the RoomAlive Toolkit, used around the world for multi-projection systems, SeeingVR to enhance the use of VR for people with low vision, Microsoft Rocketbox avatars, MoveBox and HeadBox toolkits to democratize avatar animation, and RemoteLab for distributed user studies.

Academic service: I served on multiple conference committees (CHI, UIST, CVPR, and more), as the paper chair of ACM SIGSPATIAL 2011, as the Specialty Chief Editor of Frontiers in Virtual Reality for the area of Haptics, and on the editorial board of the IEEE Computer Graphics and Applications (CG&A) journal.

I formed and led a new MR research group at Microsoft Research’s Extreme Computing lab. I envision MR applications woven into the fabric of our lives, unlike PC and mobile apps, which are limited to a specific device’s screen. Such applications must be smart enough to understand users’ changing physical and social contexts and flexible enough to adapt accordingly. We developed systems such as FLARE (Fast Layout for AR Experiences), which was used by the HoloLens team and inspired the Unity MARS product, and the Triton 3D audio simulation, used by Microsoft games such as Gears of War 4 and serving as the basis for Microsoft Acoustics. IllumiRoom, a collaboration with the Redmond lab, was presented at the CES 2013 keynote.

2005-2011 Research Manager, Bing Maps & Mobile Research Lab, Microsoft

I founded the Bing Maps & Mobile Research Lab, where we combined world-class computer vision and graphics research with product impact. Among our results are influential text-detection technology used by the Bing Mobile app and incorporated into OpenCV; the world’s first street-side imagery service, with a pipeline for street-level reconstruction of geometry and texture; novel texture compression used by Virtual Earth 3D; and more.

In addition to publishing in leading computer vision and graphics forums, our work was presented at TED 2010 and featured in the New York Times.

Automatic geopositioning of Flickr’s images
3D reconstruction of streets

1996-2001 CTO (Software)

I oversaw the R&D (software and algorithms) of the world’s first time-of-flight video camera at a start-up company. We applied the cameras to applications such as TV depth keying and reconstruction, and they served as the basis for the depth cameras used by Microsoft HoloLens and the Magic Leap HMD.

Early real-time color and depth TV
Usage of ZCam in live broadcast – KPIX CA
Demo of ZCam – Alias Wavefront Research

1985-1986 Bazbosoft, Founder

Development of the award-winning and popular Amiga photo-editing and drawing editor, Photon Paint.

For more information, please see my LinkedIn profile.                  

Selected Talks

AWE 2013: Gesture and Interactive Technologies
Behind the Scenes with Microsoft: VR in the wild
Haptics in AR and VR – Frontiers in VR
Using Virtual Reality To Help People With Disabilities – NPR
Inside AR and VR – Microsoft Research Blog
Future of Haptics in VR

News

  • Dec ’25: PC Member, UIST 2026
  • Dec ’25: Our paper has been accepted to IEEE VR. Details: TBD.
  • Dec ’25: I am one of the chairs of EuroHaptics 2025 (Siena, Italy) https://lnkd.in/gHTevsCJ
    There is an open call for event sponsorship: an opportunity to secure unparalleled visibility among global haptics leaders, contribute to cutting-edge research, and showcase your innovations.
  • Dec ’25: I am excited to announce our CHI 2026 workshop: “Augmented Body Parts: Bridging VR Embodiment and Wearable Robotics.” (https://lnkd.in/gRi2QPDZ)
    This workshop brings together researchers interested in embodied interaction, virtual embodiment, wearable robotics, haptics, multimodal sensing, and human augmentation.
    Our goal is to foster exploratory ideation on diverse HCI approaches to extending the human body, understand how people adapt to augmented body parts, and develop sim-to-real research methods that bridge VR-based simulations and wearable robotic systems.
    Organizing team: Myung Jin Kim, Seungwoo Je, Seungjae Oh, Shuto Takashita, Hongyu Zhou, Marie Muehlhaus, Eyal Ofek, and Andrea Bianchi.
    This workshop is supported by the EnSeption Team at the Electronics and Telecommunications Research Institute (ETRI).
  • Dec ’25: I have been honored as an IEEE Distinguished Contributor.
  • Nov ’25: PC Member, CHI 2026
  • Oct ’25: I presented the paper ‘ReachVox: Clutter-free Reachability Visualization for Robot Motion Planning in Virtual Reality’ at ISMAR 2025, Daejeon, Korea.
  • Oct ’25: I will deliver a keynote at the 2nd International xrWORKS’25 workshop at ISMAR 2025.
  • Oct ’25: Mr. Yilong Lin has joined my lab as a Ph.D. student.

Research Interests

Adaptive Mixed Reality (MR) & AI

I see Mixed Reality as a revolution that goes beyond display technology. Unlike traditional software, which is developed, tested, and used on standard devices, MR applications use the user’s environment as their platform. This requires such applications to be aware of the user’s unique context, physical environment, social interactions, and other applications. The rise of machine learning and large language models presents an exciting opportunity to integrate both localized and global knowledge into this process.

I explore innovative approaches to designing and implementing such applications, examining their impact on our work and social interactions.

Sensing, Computer Vision, and Privacy

Another aspect of the technology revolution is the proliferation of sensors, which enable applications to better fit users’ context and intent. Sensors can enable richer interaction with devices within a holistic digital environment around the user, focused on the user’s tasks and bridging the digital and physical worlds.

I see significant importance in designing sensing capabilities that enable new experiences while maintaining user privacy. New sensors allow us to plan which parts of the space to measure and to collect only the minimal data granularity a task needs, as sketched below.
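
As an illustration of measuring only what the task needs, here is a minimal sketch of granularity-minimizing sensing. It assumes a depth frame arrives as a NumPy array; the region of interest and cell size are illustrative values, not taken from any specific system.

```python
# Minimal sketch: keep only the coarse summary a task needs from a depth frame.
# The ROI and cell size are illustrative; "frame" stands in for real sensor data.
import numpy as np

def minimize_granularity(depth, roi, cell=16):
    """Reduce a depth frame to one averaged value per cell inside the ROI.

    depth: HxW array of depths in metres; roi: (top, left, bottom, right).
    Everything outside the ROI is discarded before any further processing.
    """
    top, left, bottom, right = roi
    patch = depth[top:bottom, left:right]              # measure only the ROI
    h = (patch.shape[0] // cell) * cell
    w = (patch.shape[1] // cell) * cell
    blocks = patch[:h, :w].reshape(h // cell, cell, w // cell, cell)
    return blocks.mean(axis=(1, 3))                    # coarse grid, not raw pixels

frame = np.random.uniform(0.5, 4.0, size=(480, 640))   # stand-in for a sensor frame
coarse = minimize_granularity(frame, roi=(100, 200, 300, 400))
print(coarse.shape)  # (12, 12): enough to detect occupancy, too coarse to identify anyone
```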

Accessibility and Inclusion in MR

Today, we can augment our senses to be in a digital space, independent of the physical laws that constrain us in reality. This enables people to accomplish more than they could in the physical world. It also levels the playing field between users with different physical, social, or environmental limitations.

More…

Haptics

Our experience of the real world is not limited to vision and audio. Today’s limited rendering of touch sensations reduces MR’s realism and the effectiveness of using our hands when working in space.

I have done extensive research on combining haptic rendering with other senses: designing novel hand-held haptic controllers that advanced the state of the art in active haptic rendering, and using scene understanding together with manipulation of hand-eye coordination to repurpose the physical environment around the user for haptic rendering.
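
To illustrate the hand-eye-coordination manipulation, here is a minimal sketch of body-warping haptic retargeting: as the physical hand approaches a physical proxy object, the rendered virtual hand is progressively offset so that it reaches the virtual target at the moment of physical contact. All coordinates and the linear blend schedule are illustrative.

```python
# Minimal sketch: body-warping haptic retargeting.
# One physical prop ("proxy") stands in for a virtual object ("target");
# all positions and the linear blend are illustrative values.
import numpy as np

def retarget_hand(hand_phys, start, proxy_phys, target_virt):
    """Return the virtual-hand position for a given physical-hand position."""
    total = np.linalg.norm(proxy_phys - start)
    travelled = np.linalg.norm(hand_phys - start)
    alpha = np.clip(travelled / total, 0.0, 1.0)   # 0 at reach start, 1 at contact
    offset = target_virt - proxy_phys              # mismatch hidden from the user
    return hand_phys + alpha * offset              # warp grows as the hand closes in

start  = np.array([0.0, 0.0, 0.0])                 # where the reach began
proxy  = np.array([0.3, 0.0, 0.4])                 # physical prop the hand will touch
target = np.array([0.45, 0.0, 0.35])               # virtual object being touched
for t in np.linspace(0.0, 1.0, 5):
    hand = start + t * (proxy - start)             # simulated straight-line reach
    print(retarget_hand(hand, start, proxy, target))  # ends exactly at the target
```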

More…

Avatars

In Virtual Reality and spatial computing simulations, avatars often represent humans.

I have worked on issues such as creating avatars, controlling avatars using natural motions, and decoupling avatars’ motions from users’ motions to improve accessibility, productivity, and user perception.

Academic Service

  • EuroHaptics 2026: Member of the conference Board
  • Organizer: CHI 2026 workshop: “Augmented Body Parts: Bridging VR Embodiment and Wearable Robotics.” (https://lnkd.in/gRi2QPDZ)
  • Frontiers in Virtual Reality Specialty Chief Editor – Haptics (’20-’22)
  • IEEE Computer Graphics & Applications (CG&A) Member of the Editorial Board
  • ACM SIGSPATIAL 2011 Conference Paper Chair
  • ACM CHI ’22, ’24, ’25, ’26 PC Member
  • VRST ’23 PC Member
  • ISMAR ’23, ’24, ’25 PC Member
  • ACM UIST ’23, ’26 PC Member
  • ACM SIGSPATIAL PC Member
  • IEEE Computer Vision & Pattern Recognition (CVPR) PC Member
  • Pacific Graphics PC Member
  • ACM International Conference on Interactive Surfaces and Spaces (ISS) PC Member
  • ACM Multimedia Systems Conference (MMSys) PC Member
  • Microsoft Research Ph.D. Fellowship area chair
  • Microsoft Research Ada Lovelace Fellowship area chair
  • Visiting Professor, School of Computer Science, Interdisciplinary Center, Herzliya, Israel (2002)

Awards

  • IEEE Distinguished Contributor, 2025
  • Best Paper: Honorable Mention, CHI 2024
  • Best Paper, DIS 2023
  • Best Paper: Honorable Mention, CHI 2023
  • Senior Member of the ACM 2022
  • Best Paper, DIS 2021
  • Best Paper: Honorable Mention, CHI 2020
  • Best Paper: Honorable Mention, IEEE VR 2020
  • Best Demo: Honorable Mention, UIST 2019
  • Best Paper, ISMAR 2019
  • Best Paper: Honorable Mention, CHI 2018
  • Golden Mouse Award – Best Video Showcase, CHI 2016
  • Best Paper, CSCW 2016
  • Golden Mouse Award – Best Video Showcase, CHI 2013
  • Best Paper, CHI 2013
  • Best Paper, UIST 2009
  • Microsoft Star Developer, Microsoft Bing Maps 2006
  • Charles Clore Scholarship, 1992

120+ Granted patents

2025
  • Automatic Generation of Markers Based On Social Interaction, Issued Apr 1, US 12265580 B2
  • Haptic controller, Issued Feb 11, US 12223109 B2
2024
  • Representing Two Dimensional Representations As Three-dimensional Avatars, Issued Dec. 24, US 12175581 B2
  • Mobile Haptic Robots, Issued Dec. 10, US 12161934 B2
  • Presenting Augmented Reality Display Data In Physical Presentation Environments, Issued Nov. 12, US 12141927 B2
2023
  • Headset Virtual Presence, Issued Oct. 17, US 11792364 B2
  • Intuitive Augmented Reality Collaboration On Visual Data, Issued Oct. 10, US 11782669 B2
  • Computing Device Headset Input, Issued Jun. 6, US 11669294 B2
  • Multilayer Controller, Issued Jan. 17, US 11556168 B2


Teaching

  • Visualization 2024/2025
  • Robot Vision (Partial) 2024/2025