MegCC is a deep-learning model compiler with the following features:
- Extremely Lightweight Runtime: only the computation kernels the model needs are kept in your binary, e.g., an 81 KB runtime for MobileNet v1
- High Performance: every operation is carefully optimized by experts
- Portable: generates nothing but computation code, which is easy to compile and use on Linux, Android, TEE, and BareMetal
- Low Memory Usage with Instant Boot: model optimization and memory planning happen at compile time, giving state-of-the-art memory usage with no extra CPU cost during inference
MegCC Structure
The MegCC compiler is built on the MLIR infrastructure. Most of the code it generates is hand-optimized. MegCC supports neural networks containing tensors with either static or dynamic shapes.
To keep the binary size minimal, MegCC can also generate the required CV operators, so you don't need to link a separate, bulky CV library.
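The idea behind the small binary can be sketched as follows. This is a conceptual illustration, not MegCC's actual code: at compile time, only the kernels the model (plus any user-requested CV operators) actually uses are emitted into the output. All names below are hypothetical.

```python
# Full kernel library available to the compiler. In MegCC these would be
# generated C sources; strings stand in for them here.
KERNEL_LIBRARY = {
    "conv2d": "/* conv2d kernel source */",
    "depthwise_conv2d": "/* depthwise conv kernel source */",
    "relu": "/* relu kernel source */",
    "softmax": "/* softmax kernel source */",
    "resize_bilinear": "/* CV resize kernel source */",
}

def select_kernels(model_ops, cv_ops=()):
    """Keep only the kernels used by the model plus requested CV ops."""
    needed = set(model_ops) | set(cv_ops)
    return {name: src for name, src in KERNEL_LIBRARY.items() if name in needed}

# A MobileNet-like model uses three op types; only those kernels (plus the
# requested CV resize) end up in the binary. The unused softmax is dropped.
emitted = select_kernels(["conv2d", "depthwise_conv2d", "relu"],
                         cv_ops=["resize_bilinear"])
print(sorted(emitted))
```

Since everything unused is eliminated at compile time rather than at link time, the runtime never pays for operators the model does not call.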
When compiling a model, MegCC:

1. generates both the kernels used by the model and any user-requested CV kernels
2. performs several optimizations, such as model optimization and static memory planning
3. dumps the data above into the final model
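The static memory planning step above can be sketched with a generic greedy interval algorithm (this is an illustration of the technique, not MegCC's exact planner): given each tensor's byte size and its live range, assign fixed offsets in one arena at compile time so tensors whose lifetimes never overlap share the same memory.

```python
def plan_memory(tensors):
    """Greedy static memory planner.

    tensors: list of (name, size, first_use, last_use) with a half-open
    live range [first_use, last_use). Returns (offsets, arena_size).
    """
    placed = []   # (offset, size, first_use, last_use)
    offsets = {}
    # Place the largest tensors first (a common greedy heuristic).
    for name, size, start, end in sorted(tensors, key=lambda t: -t[1]):
        offset = 0
        # Walk existing allocations in ascending offset order and bump
        # past any allocation that is live at the same time and overlaps
        # the candidate byte range.
        for off, sz, s, e in sorted(placed):
            lifetimes_overlap = not (end <= s or e <= start)
            bytes_overlap = offset < off + sz and off < offset + size
            if lifetimes_overlap and bytes_overlap:
                offset = off + sz
        placed.append((offset, size, start, end))
        offsets[name] = offset
    arena = max((off + sz for off, sz, _, _ in placed), default=0)
    return offsets, arena

# "a" and "c" are never live at the same time, so they share offset 0;
# the arena is 150 bytes instead of the 250 naive allocation would need.
offsets, arena = plan_memory([("a", 100, 0, 2), ("b", 50, 1, 3), ("c", 100, 2, 4)])
print(offsets, arena)
```

Because the plan is fixed at compile time, the runtime allocates one arena up front and does no per-inference memory management, which is what makes instant boot and low memory usage possible.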
The MegCC runtime then loads the model and uses the generated kernels to run inference. Only 81 KB of binary size is required to run MobileNet v1 inference (in fp32).
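Conceptually, the dumped model is little more than an ordered list of kernel calls plus the precomputed memory plan, so "loading" is cheap and inference simply walks that list. The sketch below illustrates this dispatch idea with hypothetical names; it is not MegCC's real API, and Python lambdas stand in for the compiled C kernels.

```python
# Stand-ins for the generated, compiled kernels.
GENERATED_KERNELS = {
    "conv2d": lambda x: [v * 2 for v in x],    # toy "convolution"
    "relu":   lambda x: [max(0, v) for v in x],
}

def run_model(model, inputs):
    """model: the ordered kernel names dumped at compile time."""
    data = inputs
    for kernel_name in model:
        data = GENERATED_KERNELS[kernel_name](data)
    return data

print(run_model(["conv2d", "relu"], [-1, 2]))  # → [0, 4]
```

Because all optimization and planning already happened at compile time, the runtime has nothing to analyze or schedule, which is why it stays so small.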
MegCC supports the Arm64, ArmV7, X86, and BareMetal backends. You may want to check the list of supported operators.