Performs high-speed ML inferencing
The on-board Edge TPU coprocessor is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS, in a power-efficient manner. See more performance benchmarks.

Works with Debian Linux
Integrates with any Debian-based Linux system with a compatible card module slot.

Supports TensorFlow Lite
No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU (see the example sketch after this feature list).
Supports AutoML Vision Edge
Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
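
As a rough sketch of the TensorFlow Lite workflow mentioned above: once a quantized TensorFlow Lite model has been compiled for the Edge TPU with the edgetpu_compiler tool, it can be run on the device using the PyCoral library. This is an illustrative example only, assuming the pycoral and Pillow packages are installed; the model, label, and image file names are placeholders, not files shipped with the device.

```python
# Minimal image-classification sketch using the PyCoral API.
# File names below are placeholders for an Edge TPU-compiled model,
# its label file, and a test image.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Load the Edge TPU-compiled model and allocate its tensors.
interpreter = make_interpreter('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the input image to the model's expected input size and set it as input.
image = Image.open('test_image.jpg').resize(
    common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)

# Run inference on the Edge TPU and print the top result.
interpreter.invoke()
labels = read_label_file('imagenet_labels.txt')
for c in classify.get_classes(interpreter, top_k=1):
    print(labels.get(c.id, c.id), c.score)
```

The same compiled model can also be loaded through the standard TensorFlow Lite interpreter with the Edge TPU delegate; PyCoral simply wraps that flow with convenience helpers.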