We continuously release optimized models that deliver the world’s fastest on-device inference performance.