LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
Since its debut in 2017, TFLite has enabled developers to bring ML-powered experiences to over 100K apps running on 2.7B devices. More recently, TFLite has grown beyond its TensorFlow roots to support models authored in PyTorch, JAX, and Keras with the same leading performance. The name LiteRT captures this multi-framework vision for the future: enabling developers to start with any popular framework and run their model on-device with exceptional performance.
LiteRT, part of the Google AI Edge suite of tools, is the runtime that lets you seamlessly deploy ML and AI models on Android, iOS, and embedded devices. With AI Edge's robust model conversion and optimization tools, you can ready both open-source and custom models for on-device development.
This change will roll out gradually. Starting today, you'll see the LiteRT name reflected in the developer documentation, which is moving to ai.google.dev/edge/litert, and in other references across the AI Edge website. The documentation at tensorflow.org/lite now redirects to corresponding pages at ai.google.dev/edge/litert.
The main TensorFlow framework won't be affected, nor will apps already using TensorFlow Lite.
How to access LiteRT
Our goal is for this change to be minimally disruptive, requiring as few code changes from developers as possible.
If you currently use TensorFlow Lite via packages, you'll need to update your dependencies to the new LiteRT packages on Maven, PyPI, or CocoaPods (a minimal Python sketch follows these scenarios).
If you currently use TensorFlow Lite via Google Play Services, no change is necessary at this time.
If you currently build TensorFlow Lite from source, please continue building from the TensorFlow repo until the code has fully moved to the new LiteRT repo later this year.
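For Python users, the dependency swap might look like the sketch below. The `ai-edge-litert` package name and `ai_edge_litert.interpreter` module are assumptions based on the package published to PyPI; verify the exact coordinates for your platform in the LiteRT docs.

```python
# Minimal sketch of moving a Python project from the TFLite interpreter
# to LiteRT. Assumes the `ai-edge-litert` PyPI package
# (pip install ai-edge-litert); verify the name in the LiteRT docs.

# Before: interpreter imported from tflite-runtime (or tf.lite)
# from tflite_runtime.interpreter import Interpreter

# After: the same Interpreter API, published under the LiteRT name
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # hypothetical model file
interpreter.allocate_tensors()
```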
Frequently asked questions
1. What's changing beyond the new name, LiteRT?
For now, the only change is the new name, LiteRT. Your production apps won't be affected. With a new name and refreshed vision, look out for more updates coming to LiteRT, improving how you deploy classic ML models, LLMs, and diffusion models with GPU and NPU acceleration across platforms.
2. What’s taking place to the TensorFlow Lite Support Library (together with TensorFlow Lite Duties)?
The TensorFlow Lite assist library and TensorFlow Lite Duties will stay within the /tensorflow repository at the moment. We encourage you to make use of MediaPipe Tasks for future improvement.
3. What’s taking place to TensorFlow Lite Mannequin Maker?
You may proceed to entry TFLite Mannequin Maker by way of https://pypi.org/project/tflite-model-maker/
4. What if I want to contribute code?
For now, please contribute code to the existing TensorFlow Lite repository. We'll make a separate announcement when we're ready for contributions to the LiteRT repository.
5. What’s taking place to the .tflite file extension and file format?
No adjustments are being made to the .tflite file extension or format. Conversion instruments will proceed to output .tflite flatbuffer information, and .tflite information might be readable by LiteRT.
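To illustrate, an existing .tflite file loads and runs exactly as before. This sketch assumes a hypothetical model.tflite and the same `ai-edge-litert` package as above; the calls mirror the long-standing `tf.lite.Interpreter` API.

```python
import numpy as np
from ai_edge_litert.interpreter import Interpreter  # assumed package, as above

# Load an existing .tflite flatbuffer; the file format itself is unchanged.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference with a dummy input matching the model's shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```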
6. How do I convert models to the .tflite format?
For TensorFlow, Keras, and JAX, you can continue to use the same flows. For PyTorch support, check out ai-edge-torch.
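For example, the familiar TensorFlow converter flow still produces .tflite files, and ai-edge-torch offers an analogous one-call conversion for PyTorch. The sketch below assumes a SavedModel directory; the PyTorch portion is based on the ai-edge-torch README and is worth checking against the current API.

```python
import tensorflow as tf

# TensorFlow / Keras: the existing converter flow is unchanged.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
tflite_bytes = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)

# PyTorch: ai-edge-torch converts an nn.Module given sample inputs.
# (Sketch only; see the ai-edge-torch repo for the current API.)
# import torch, ai_edge_torch
# edge_model = ai_edge_torch.convert(torch_model.eval(), (torch.randn(1, 3, 224, 224),))
# edge_model.export("model_from_torch.tflite")
```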
7. Will there be any changes to classes and methods?
No. Aside from package names, you won't have to change any code you've written for now.
8. Will there be any changes to TensorFlow.js?
No, TensorFlow.js will continue to function independently as part of the TensorFlow codebase.
9. My production app uses TensorFlow Lite. Will it be affected?
Apps that have already deployed TensorFlow Lite won't be affected. This includes apps that access TensorFlow Lite via Google Play Services. (TFLite is compiled into apps at build time, so once they're deployed, apps have no external dependency.)
10. Why "LiteRT"?
"LiteRT" reflects the legacy of TensorFlow Lite, a pioneering "lite" on-device runtime, plus Google's commitment to supporting today's thriving multi-framework ecosystem.
11. Is TensorFlow Lite still being actively developed?
Yes, but under the name LiteRT. Active development will continue on the runtime (now called LiteRT), as well as the conversion and optimization tools. To ensure you're using the most up-to-date version of the runtime, please use LiteRT.
12. Where can I see examples of LiteRT in practice?
You can find examples for Python, Android, and iOS in the official LiteRT samples repo.
We’re excited for the way forward for on-device ML, and are dedicated to our imaginative and prescient of constructing LiteRT the simplest to make use of, highest efficiency runtime for a variety of fashions.