---
layout: default
title: Home
---

MediaPipe is the simplest way for researchers and developers to build world-class ML solutions and applications for mobile, edge, cloud and the web.
MediaPipe offers cross-platform, customizable ML solutions for live and streaming media.
- **End-to-End acceleration**: Built-in fast ML inference and processing, accelerated even on common hardware
- **Build once, deploy anywhere**: Unified solution works across Android, iOS, desktop/cloud, web and IoT
- **Ready-to-use solutions**: Cutting-edge ML solutions demonstrating the full power of the framework
- **Free and open source**: Framework and solutions both under Apache 2.0, fully extensible and customizable
- Face Mesh
- Iris
- Hands
- Pose
- Hair Segmentation
- Box Tracking
- Instant Motion Tracking
- Objectron
- KNIFT
See also MediaPipe Models and Model Cards for ML models released in MediaPipe.
The MediaPipe Python package is available on PyPI, and can be installed simply with `pip install mediapipe` on Linux and macOS.
MediaPipe on the Web is an effort to run the same ML solutions built for mobile and desktop also in web browsers. The official API is under construction, but the core technology has been proven effective. Please see MediaPipe on the Web in Google Developers Blog for details.
You can use the following links to load a demo in the MediaPipe Visualizer, and there click the "Runner" icon in the top bar. The demos use your webcam video as input, which is processed locally in real time and never leaves your device.
MediaPipe is currently in alpha at v0.7. We may still be making breaking API changes and expect to reach stable APIs by v1.0.
We welcome contributions. Please follow these guidelines.
We use GitHub issues for tracking requests and bugs. Please post questions to Stack Overflow with a `mediapipe` tag.