62 skills found · Page 1 of 3
pytorch / TensorRT: PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
dotnet / TorchSharp: A .NET library that provides access to the library that powers PyTorch.
VoltaML / VoltaML: ⚡VoltaML is a lightweight library to convert and run your ML/DL models in high-performance inference runtimes like TensorRT, TorchScript, ONNX and TVM.
PINTO0309 / Onnx2tf: A tool for converting ONNX files to LiteRT/TFLite/TensorFlow, PyTorch native code (nn.Module), TorchScript (.pt), state_dict (.pt), Exported Program (.pt2), and Dynamo ONNX. It also supports direct conversion from LiteRT to PyTorch.
zhiqwang / Yolort: yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM and NCNN.
DeepVAC / Deepvac: PyTorch Project Specification.
tomas-gajarsky / Facetorch: Python library for analysing faces using PyTorch.
yistLin / Dvector: Speaker embedding (d-vector) trained with GE2E loss.
triton-inference-server / Pytorch Backend: The Triton backend for PyTorch TorchScript models.
szymonmaszke / Torchlambda: Lightweight tool to deploy PyTorch models to AWS Lambda.
snakers4 / Russian Stt Text Normalization: Russian text normalization pipeline for speech-to-text and other applications, based on sequence-to-sequence tagging networks.
pytorch / Extension Script: Example repository for custom C++/CUDA operators for TorchScript.
louis-she / Torchscript Demos: A brief introduction to TorchScript using MNIST.
masahi / Torchscript To Tvm: No description available.
yistLin / Universal Vocoder: A PyTorch implementation of the universal neural vocoder.
xiezhq-hermann / Graphiler: Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into efficient execution plans.
dnth / Timm Flutter Pytorch Lite Blogpost: PyTorch at the Edge: Deploying Over 964 TIMM Models on Android with TorchScript and Flutter.
raghavmecheri / Pytorchjs: Torch and TorchVision, but for NodeJS.
IlyaOvodov / TorchScriptTutorial: TorchScript tutorial (Python, C++).
k9ele7en / Triton TensorRT Inference CRAFT Pytorch: Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server, multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
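The common thread across these projects is TorchScript export: compiling a PyTorch model to a serialized, Python-independent form that runtimes like Triton, LibTorch, or mobile backends can load. As a minimal sketch of that workflow (assuming PyTorch is installed; `TinyNet` is a hypothetical toy model, not from any repo above):

```python
import torch
import torch.nn as nn


class TinyNet(nn.Module):
    """Hypothetical toy model used only to illustrate TorchScript export."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))


model = TinyNet().eval()

# Compile the module to TorchScript and serialize it to disk.
scripted = torch.jit.script(model)
scripted.save("tiny_net.pt")

# The saved archive can be reloaded without the original Python class,
# which is what C++/mobile/Triton deployments rely on.
loaded = torch.jit.load("tiny_net.pt")
out = loaded(torch.ones(1, 4))
print(out.shape)  # torch.Size([1, 2])
```

The same `.pt` archive is what several entries above consume, whether loaded via LibTorch in C++ or registered as a TorchScript model in a Triton model repository.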