ExecuTorch is an end-to-end solution for on-device inference and training. It powers much of Meta's on-device AI experiences across Facebook, Instagram, Meta Quest, Ray-Ban Meta Smart Glasses, WhatsApp, and more.
It supports a wide range of models, including large language models (LLMs), computer vision (CV) models, automatic speech recognition (ASR), and text to speech (TTS).
Platform Support:
Operating Systems:
iOS
macOS (ARM64)
Android
Linux
Microcontrollers
Hardware Acceleration:
Apple
Arm
Cadence
MediaTek
OpenVINO
Qualcomm
Vulkan
XNNPACK
Key value propositions of ExecuTorch are:
Portability: Compatibility with a wide variety of computing platforms,
from high-end mobile phones to highly constrained embedded systems and
microcontrollers.
Productivity: Enabling developers to use the same toolchains and developer
tools from PyTorch model authoring and conversion through debugging and
deployment to a wide variety of platforms.
Performance: Providing end users with a seamless and high-performance
experience thanks to a lightweight runtime that takes full advantage of
hardware capabilities such as CPUs, NPUs, and DSPs.
Getting Started
To get started, you can:
Visit the Step by Step Tutorial to get things running locally and deploy a model to a device
Use this Colab Notebook to start experimenting right away
Jump straight into LLM use cases by following specific instructions for popular open-source models such as Llama, Qwen 3, Phi-4-mini, and Llava
Feedback and Engagement
We welcome any feedback, suggestions, and bug reports from the community to help
us improve our technology. Check out the Discussion Board or chat with us in real time on Discord.
Contributing
We welcome contributions. To get started, review the contributing guidelines and chat with us on Discord.