33 skills found · Page 1 of 2
google-ai-edge / LiteRT: LiteRT, the successor to TensorFlow Lite, is Google's on-device framework for high-performance ML and GenAI deployment on edge platforms, via efficient conversion, runtime, and optimization.
google-ai-edge / LiteRT LM: No description available.
google-ai-edge / Litert Torch: Supports PyTorch model conversion with LiteRT.
PINTO0309 / Onnx2tf: A tool for converting ONNX files to LiteRT/TFLite/TensorFlow, PyTorch native code (nn.Module), TorchScript (.pt), state_dict (.pt), ExportedProgram (.pt2), and Dynamo ONNX. It also supports direct conversion from LiteRT to PyTorch.
google / Xrblocks: XR Blocks is a lightweight WebXR + AI library for rapidly prototyping advanced AI + XR experiences.
google-ai-edge / Litert Samples: No description available.
google-ai-edge / AI Edge Quantizer: Flexible post-training quantization for LiteRT models.
jasonmayes / VectorSearch.js: Client-side vector search using EmbeddingGemma with Web AI (LiteRT.js, TensorFlow.js, and Transformers.js).
NSTiwari / YOLOv10 LiteRT Android: Converts the YOLOv10 object detection model to LiteRT (.tflite) format and deploys it on Android using Google AI Edge for on-device inference.
kursor1337 / KTensorFlow: Kotlin Multiplatform library for convenient use of LiteRT (TensorFlow Lite) models in common code.
fghjhuang / LiteRTS: A real-time intercom library for Android/iOS, based on Opus, Speex, and WebRTC.
stevan-milovanovic / LiteRT For Android: Image classification, image captioning, and LLM inference with LiteRT.
KegangWangCCNU / FacePhys Release: Release of the FacePhys model. FacePhys is an rPPG algorithm utilizing state space models (SSMs). Built on LiteRT, it is highly optimized for on-device CPU deployment.
mhss1 / Shade: On-device sensitive content blocker for Android. Works across any app, powered by a custom-trained on-device AI model.
iFleey / PPOCRv5 Android: Real-time OCR app for Android with PP-OCRv5 and LiteRT.
RunEdgeAI / Coreflow: Graph-based C++ runtime for building and executing AI, ML, and computer vision pipelines across devices.
Mutesa-Cedric / React Litert: A React library for running on-device AI with Google’s LiteRT runtime.
UCSBarchlab / Pyrtlnet: A hardware implementation of quantized neural network inference in the PyRTL hardware description language.
IoT-gamer / Segment Anything Dinov3 Onnx: A set of tools and examples for converting and utilizing powerful vision models, DINOv3 and EdgeTAM (SAM2), within the ONNX ecosystem.
SNU-RTOS / Minimal Litert: Comprehensive LiteRT example project with a Bazel build system, demonstrating a simple inference app and profiling using the XNNPACK and GPU delegates.