- Performs high-speed ML inferencing: The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS, in a power-efficient manner.
- Works with Debian Linux: Integrates with any Debian-based Linux system that has a compatible card module slot.
- Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU (see the inference sketch after this list).
- Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
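
As a rough illustration of the TensorFlow Lite workflow above, the sketch below loads a model that has already been compiled for the Edge TPU and runs a single inference through the PyCoral helper library. It assumes the Edge TPU runtime and the `pycoral` package are installed and an accelerator is attached; the model filename is a placeholder.

```python
# Minimal sketch: run one inference on the Edge TPU via PyCoral.
# Assumes the Edge TPU runtime and the pycoral package are installed, and that
# "mobilenet_v2_edgetpu.tflite" (a placeholder name) was produced by the
# Edge TPU Compiler from a quantized TensorFlow Lite model.
import numpy as np
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, classify

# Load the Edge TPU-compiled model; ops compiled for the Edge TPU run on the
# coprocessor, and any remaining ops fall back to the CPU.
interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')
interpreter.allocate_tensors()

# Feed a dummy uint8 image sized to the model's expected input.
width, height = common.input_size(interpreter)
common.set_input(interpreter, np.zeros((height, width, 3), dtype=np.uint8))

# Run inference and read back the top classification result.
interpreter.invoke()
print(classify.get_classes(interpreter, top_k=1))
```

In practice, a quantized `.tflite` model is first converted for the accelerator with the Edge TPU Compiler (`edgetpu_compiler`), and the resulting file is what gets loaded as shown above.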