This sample app classifies images and displays the top 3 predicted ImageNet classes.
The app aims to showcase best practices for using TF Lite for model inference on Android devices.
- Clone this repository with Git LFS enabled.
- Download Android Studio. Version 2023.1.1 or newer is required.
- Enable USB debugging on your Android device.
- Download or export a compatible model from AI Hub Models.
- Copy the `.tflite` file to `src/main/assets/<your_model>.tflite`.
- In `../gradle.properties`, set the value of `classification_tfLiteModelAsset` to the name of your model file (`<your_model>.tflite`).
- Open the PARENT folder (`android`) (NOT THIS FOLDER) in Android Studio, run Gradle sync, and build the `ImageClassification` target.
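As a concrete example, assuming a model exported as `mobilenet_v2.tflite` (a hypothetical file name; substitute your own) and copied into `src/main/assets/`, the relevant line in `../gradle.properties` would read:

```
# Hypothetical model file name; replace with your own asset's name.
classification_tfLiteModelAsset=mobilenet_v2.tflite
```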
By default, this app supports the following hardware:
Comments have been left in `TFLiteHelpers.java` and `AIHubDefaults.java` to guide you on how to add support for additional TF Lite delegates that could target other hardware.
- TensorFlow Lite (.tflite)
| Input | Description | Shape | Data Type |
| --- | --- | --- | --- |
| Image | An RGB image | [1, Height, Width, 3] | float32, or uint8 (zero point of 0, scale of 1/255) |
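As an illustration of the uint8 input convention above (not the app's actual code), each 8-bit channel value maps to a float via `(value - zeroPoint) * scale` with zero point 0 and scale 1/255:

```java
// Sketch of the uint8 -> float32 input convention: zero point 0, scale 1/255.
public class InputQuantization {
    // Dequantize one uint8 channel value (0..255) to a float in [0, 1].
    static float dequantize(int value) {
        final int zeroPoint = 0;
        final float scale = 1.0f / 255.0f;
        return (value - zeroPoint) * scale;
    }

    public static void main(String[] args) {
        System.out.println(dequantize(0));   // 0.0
        System.out.println(dequantize(255)); // ~1.0
    }
}
```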
| Output | Description | Shape | Data Type |
| --- | --- | --- | --- |
| Classes | ImageNet classes | [1, 1000] | float32, uint8, or int8 (any quantization parameters) |
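The app displays the top 3 entries of this [1, 1000] class-score output. A minimal, self-contained sketch of that selection step (not the app's actual code) looks like:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: pick the indices of the k highest class scores, best first.
public class TopClasses {
    static int[] topK(float[] scores, int k) {
        // Sort class indices by descending score.
        Integer[] order = new Integer[scores.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble((Integer i) -> (double) scores[i]).reversed());
        // Keep the first k indices.
        int[] top = new int[k];
        for (int i = 0; i < k; i++) top[i] = order[i];
        return top;
    }

    public static void main(String[] args) {
        float[] scores = {0.1f, 0.7f, 0.05f, 0.15f};
        System.out.println(Arrays.toString(topK(scores, 3))); // [1, 3, 0]
    }
}
```

In the app itself, `scores` would be the model's output tensor (dequantized first if the model emits uint8 or int8), and the three indices would be mapped to ImageNet class labels for display.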
Compatible AI Hub Models
Below is a non-exhaustive list of AI Hub Models that should be compatible with this app.
Not every model has been individually tested. Please file an issue or reach out on Slack if you find a model in this list with app compatibility issues.
- ConvNext-Tiny
- ConvNext-Tiny-w8a8-Quantized
- DenseNet-121
- EfficientNet-B0
- GoogLeNet
- GoogLeNetQuantized
- Inception-v3
- Inception-v3-Quantized
- MNASNet05
- MobileNet-v2
- MobileNet-v2-Quantized
- MobileNet-v3-Large
- MobileNet-v3-Large-Quantized
- MobileNet-v3-Small
- RegNet
- RegNetQuantized
- ResNet101
- ResNet101Quantized
- ResNet18
- ResNet18Quantized
- ResNet50
- ResNet50Quantized
- ResNeXt101
- ResNeXt101Quantized
- ResNeXt50
- ResNeXt50Quantized
- ShuffleNet-v2
- ShuffleNet-v2Quantized
- SqueezeNet-1_1
- SqueezeNet-1_1Quantized
- Swin-Base
- Swin-Small
- Swin-Tiny
- VIT
- WideResNet50
- WideResNet50-Quantized
Each AI Hub profile or inference job, once completed, will contain a Runtime Configuration section.
Modify `TFLiteHelpers.java` according to the runtime configuration applied to the job. Comment stubs are included to guide you (search for `TO REPLICATE AN AI HUB JOB...`).
Note that if your job uses delegates other than QNN NPU, GPUv2, and TFLite, you will also need to add support for those delegates to the app.
- Android SDK
- TensorFlow Lite
- TF Lite Support Library
- QNN SDK (TF Lite Delegate)
- GPUv2 Delegate
- XNNPack Delegate
This app is released under the BSD-3 License found at the root of this repository.
All models from AI Hub Models are released under separate license(s). Refer to the AI Hub Models repository for details on each model.
The QNN SDK dependency is also released under a separate license. Please refer to the LICENSE file downloaded with the SDK for details.