
Onnx2tf

A tool for converting ONNX files to LiteRT/TFLite/TensorFlow, PyTorch native code (nn.Module), TorchScript (.pt), state_dict (.pt), Exported Program (.pt2), and Dynamo ONNX. It also supports direct conversion from LiteRT to PyTorch.

Install / Use

/learn @PINTO0309/Onnx2tf
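Beyond the skill command above, onnx2tf itself is distributed on PyPI and used as a CLI. A minimal standalone invocation might look like the following (file and folder names are placeholders; `-i` is the input ONNX path and `-o` the output folder, per the project's CLI):

```shell
# Install the converter from PyPI
pip install -U onnx2tf

# Convert an ONNX model; results are written to the output folder
onnx2tf -i model.onnx -o saved_model
```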

README

onnx2tf


You should use LiteRT Torch (https://github.com/google-ai-edge/litert-torch) together with ai-edge-quantizer (https://github.com/google-ai-edge/ai-edge-quantizer) rather than onnx2tf.

<p align="center"> <img src="https://user-images.githubusercontent.com/33194443/193840307-fa69eace-05a9-4d93-9c5d-999cf88af28e.png" /> </p>


Model Conversion Status

https://github.com/PINTO0309/onnx2tf/wiki/model_status

Supported layers

  • https://github.com/onnx/onnx/blob/main/docs/Operators.md

  • Legend: :heavy_check_mark: Supported, :white_check_mark: Partial support. Help wanted: pull requests are welcome.

    <details><summary>See the list of supported layers</summary><div>

    |OP|Status|
    |:-|:-:|
    |Abs|:heavy_check_mark:|
    |Acosh|:heavy_check_mark:|
    |Acos|:heavy_check_mark:|
    |Add|:heavy_check_mark:|
    |AffineGrid|:heavy_check_mark:|
    |And|:heavy_check_mark:|
    |ArgMax|:heavy_check_mark:|
    |ArgMin|:heavy_check_mark:|
    |Asinh|:heavy_check_mark:|
    |Asin|:heavy_check_mark:|
    |Atanh|:heavy_check_mark:|
    |Atan|:heavy_check_mark:|
    |Attention|:heavy_check_mark:|
    |AveragePool|:heavy_check_mark:|
    |BatchNormalization|:heavy_check_mark:|
    |Bernoulli|:heavy_check_mark:|
    |BitShift|:heavy_check_mark:|
    |BitwiseAnd|:heavy_check_mark:|
    |BitwiseNot|:heavy_check_mark:|
    |BitwiseOr|:heavy_check_mark:|
    |BitwiseXor|:heavy_check_mark:|
    |BlackmanWindow|:heavy_check_mark:|
    |Cast|:heavy_check_mark:|
    |Ceil|:heavy_check_mark:|
    |Celu|:heavy_check_mark:|
    |CenterCropPad|:heavy_check_mark:|
    |Clip|:heavy_check_mark:|
    |Col2Im|:white_check_mark:|
    |Compress|:heavy_check_mark:|
    |ConcatFromSequence|:heavy_check_mark:|
    |Concat|:heavy_check_mark:|
    |ConstantOfShape|:heavy_check_mark:|
    |Constant|:heavy_check_mark:|
    |Conv|:heavy_check_mark:|
    |ConvInteger|:white_check_mark:|
    |ConvTranspose|:heavy_check_mark:|
    |Cosh|:heavy_check_mark:|
    |Cos|:heavy_check_mark:|
    |CumProd|:heavy_check_mark:|
    |CumSum|:heavy_check_mark:|
    |DeformConv|:white_check_mark:|
    |DepthToSpace|:heavy_check_mark:|
    |Det|:heavy_check_mark:|
    |DequantizeLinear|:heavy_check_mark:|
    |DFT|:white_check_mark:|
    |Div|:heavy_check_mark:|
    |Dropout|:heavy_check_mark:|
    |DynamicQuantizeLinear|:heavy_check_mark:|
    |Einsum|:heavy_check_mark:|
    |Elu|:heavy_check_mark:|
    |Equal|:heavy_check_mark:|
    |Erf|:heavy_check_mark:|
    |Expand|:heavy_check_mark:|
    |Exp|:heavy_check_mark:|
    |EyeLike|:heavy_check_mark:|
    |Flatten|:heavy_check_mark:|
    |Floor|:heavy_check_mark:|
    |FusedConv|:heavy_check_mark:|
    |GatherElements|:heavy_check_mark:|
    |GatherND|:heavy_check_mark:|
    |Gather|:heavy_check_mark:|
    |Gelu|:heavy_check_mark:|
    |Gemm|:heavy_check_mark:|
    |GlobalAveragePool|:heavy_check_mark:|
    |GlobalLpPool|:heavy_check_mark:|
    |GlobalMaxPool|:heavy_check_mark:|
    |GreaterOrEqual|:heavy_check_mark:|
    |Greater|:heavy_check_mark:|
    |GridSample|:white_check_mark:|
    |GroupNormalization|:heavy_check_mark:|
    |GRU|:heavy_check_mark:|
    |HammingWindow|:white_check_mark:|
    |HannWindow|:white_check_mark:|
    |Hardmax|:heavy_check_mark:|
    |HardSigmoid|:heavy_check_mark:|
    |HardSwish|:heavy_check_mark:|
    |Identity|:heavy_check_mark:|
    |If|:heavy_check_mark:|
    |ImageDecoder|:white_check_mark:|
    |Input|:heavy_check_mark:|
    |InstanceNormalization|:heavy_check_mark:|
    |Inverse|:heavy_check_mark:|
    |IsInf|:heavy_check_mark:|
    |IsNaN|:heavy_check_mark:|
    |LayerNormalization|:heavy_check_mark:|
    |LeakyRelu|:heavy_check_mark:|
    |LessOrEqual|:heavy_check_mark:|
    |Less|:heavy_check_mark:|
    |Log|:heavy_check_mark:|
    |LogSoftmax|:heavy_check_mark:|
    |Loop|:heavy_check_mark:|
    |LpNormalization|:heavy_check_mark:|
    |LpPool|:heavy_check_mark:|
    |LRN|:heavy_check_mark:|
    |LSTM|:heavy_check_mark:|
    |MatMul|:heavy_check_mark:|
    |MatMulInteger|:heavy_check_mark:|
    |MaxPool|:heavy_check_mark:|
    |Max|:heavy_check_mark:|
    |MaxRoiPool|:heavy_check_mark:|
    |MaxUnpool|:heavy_check_mark:|
    |Mean|:heavy_check_mark:|
    |MeanVarianceNormalization|:heavy_check_mark:|
    |MelWeightMatrix|:heavy_check_mark:|
    |Min|:heavy_check_mark:|
    |Mish|:heavy_check_mark:|
    |Mod|:heavy_check_mark:|
    |Mul|:heavy_check_mark:|
    |Multinomial|:heavy_check_mark:|
    |Neg|:heavy_check_mark:|
    |NegativeLogLikelihoodLoss|:heavy_check_mark:|
    |NonMaxSuppression|:heavy_check_mark:|
    |NonZero|:heavy_check_mark:|
    |Optional|:heavy_check_mark:|
    |OptionalGetElement|:heavy_check_mark:|
    |OptionalHasElement|:heavy_check_mark:|
    |Not|:heavy_check_mark:|
    |OneHot|:heavy_check_mark:|
    |Or|:heavy_check_mark:|
    |Pad|:heavy_check_mark:|
    |Pow|:heavy_check_mark:|
    |PRelu|:heavy_check_mark:|
    |QLinearAdd|:heavy_check_mark:|
    |QLinearAveragePool|:heavy_check_mark:|
    |QLinearConcat|:heavy_check_mark:|
    |QLinearConv|:heavy_check_mark:|
    |QGemm|:heavy_check_mark:|
    |QLinearGlobalAveragePool|:heavy_check_mark:|
    |QLinearLeakyRelu|:heavy_check_mark:|
    |QLinearMatMul|:heavy_check_mark:|
    |QLinearMul|:heavy_check_mark:|
    |QLinearSigmoid|:heavy_check_mark:|
    |QLinearSoftmax|:heavy_check_mark:|
    |QuantizeLinear|:heavy_check_mark:|
    |RandomNormalLike|:heavy_check_mark:|
    |RandomNormal|:heavy_check_mark:|
    |RandomUniformLike|:heavy_check_mark:|
    |RandomUniform|:heavy_check_mark:|
    |Range|:heavy_check_mark:|
    |Reciprocal|:heavy_check_mark:|
    |ReduceL1|:heavy_check_mark:|
    |ReduceL2|:heavy_check_mark:|
    |ReduceLogSum|:heavy_check_mark:|
    |ReduceLogSumExp|:heavy_check_mark:|
    |ReduceMax|:heavy_check_mark:|
    |ReduceMean|:heavy_check_mark:|
    |ReduceMin|:heavy_check_mark:|
    |ReduceProd|:heavy_check_mark:|
    |ReduceSum|:heavy_check_mark:|
    |ReduceSumSquare|:heavy_check_mark:|
    |RegexFullMatch|:heavy_check_mark:|
    |Relu|:heavy_check_mark:|
    |Reshape|:heavy_check_mark:|
    |Resize|:heavy_check_mark:|
    |ReverseSequence|:heavy_check_mark:|
    |RNN|:heavy_check_mark:|
    |RoiAlign|:heavy_check_mark:|
    |RotaryEmbedding|:heavy_check_mark:|
    |Round|:heavy_check_mark:|
    |ScaleAndTranslate|:heavy_check_mark:|
    |Scatter|:heavy_check_mark:|
    |ScatterElements|:heavy_check_mark:|
    |ScatterND|:heavy_check_mark:|
    |Scan|:heavy_check_mark:|
    |Selu|:heavy_check_mark:|
    |SequenceAt|:heavy_check_mark:|
    |SequenceConstruct|:heavy_check_mark:|
    |SequenceEmpty|:heavy_check_mark:|
    |SequenceErase|:heavy_check_mark:|
    |SequenceInsert|:heavy_check_mark:|
    |SequenceLength|:heavy_check_mark:|
    |Shape|:heavy_check_mark:|
    |Shrink|:heavy_check_mark:|
    |Sigmoid|:heavy_check_mark:|
    |Sign|:heavy_check_mark:|
    |Sinh|:heavy_check_mark:|
    |Sin|:heavy_check_mark:|
    |Size|:heavy_check_mark:|
    |Slice|:heavy_check_mark:|
    |Softmax|:heavy_check_mark:|
    |SoftmaxCrossEntropyLoss|:heavy_check_mark:|
    |Softplus|:heavy_check_mark:|
    |Softsign|:heavy_check_mark:|
    |SpaceToDepth|:heavy_check_mark:|
    |Split|:heavy_check_mark:|
    |SplitToSequence|:heavy_check_mark:|
    |Sqrt|:heavy_check_mark:|
    |Squeeze|:heavy_check_mark:|
    |STFT|:white_check_mark:|
    |StringConcat|:heavy_check_mark:|
    |StringNormalizer|:heavy_check_mark:|
    |StringSplit|:heavy_check_mark:|
    |Sub|:heavy_check_mark:|
    |Sum|:heavy_check_mark:|
    |Tan|:heavy_check_mark:|
    |Tanh|:heavy_check_mark:|
    |TensorScatter|:heavy_check_mark:|
    |TfIdfVectorizer|:white_check_mark:|
    |ThresholdedRelu|:heavy_check_mark:|
    |Tile|:heavy_check_mark:|
    |TopK|:heavy_check_mark:|
    |Transpose|:heavy_check_mark:|
    |Trilu|:heavy_check_mark:|
    |Unique|:heavy_check_mark:|
    |Unsqueeze|:heavy_check_mark:|
    |Upsample|:heavy_check_mark:|
    |Where|:heavy_check_mark:|
    |Xor|:heavy_check_mark:|

    </div></details>
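As a quick pre-flight check before converting, you can compare the op types in your model against this table. A minimal pure-Python sketch (the `PARTIAL_SUPPORT` set is transcribed from the table above; `model_ops` stands in for the op types you would extract from your ONNX graph):

```python
# Ops marked :white_check_mark: (partial support) in the table above;
# every other listed op is fully supported.
PARTIAL_SUPPORT = {
    "Col2Im", "ConvInteger", "DeformConv", "DFT", "GridSample",
    "HammingWindow", "HannWindow", "ImageDecoder", "STFT", "TfIdfVectorizer",
}

def triage_ops(model_ops):
    """Partition a model's op types into fully vs. partially supported."""
    ops = set(model_ops)
    partial = sorted(ops & PARTIAL_SUPPORT)
    full = sorted(ops - PARTIAL_SUPPORT)
    return full, partial

# Hypothetical op list; with the onnx package installed you would collect it via:
#   model_ops = {n.op_type for n in onnx.load("model.onnx").graph.node}
full, partial = triage_ops(["Conv", "Relu", "GridSample", "Add"])
```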

flatbuffer_direct execution path

Currently, the flatbuffer_direct backend is faster and has a higher success rate than the default tf_converter backend. The simplest flatbuffer_direct conversion command outputs only a LiteRT model; adding --flatbuffer_direct_output_saved_model also outputs a saved_model, as before. The difference from the previous behavior is that the saved_model graph is now built from the LiteRT model.
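Putting the two flags mentioned above together, a flatbuffer_direct conversion might look like this (the model file name is a placeholder):

```shell
# Convert with the flatbuffer_direct backend; outputs only a LiteRT model
onnx2tf -i model.onnx --tflite_backend flatbuffer_direct

# Additionally emit a saved_model, rebuilt from the LiteRT model
onnx2tf -i model.onnx --tflite_backend flatbuffer_direct \
  --flatbuffer_direct_output_saved_model
```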

[!IMPORTANT] Starting with onnx2tf v2.4.0, tf_converter will be deprecated and the default backend will switch to flatbuffer_direct. With the v2.3.3 update, all backward-compatible conversion options have been migrated to flatbuffer_direct, so only minor bug fixes will land until April. If you provide ONNX sample models, I will consider incorporating them into flatbuffer_direct. I'll incorporate ai-edge-quantizer when I feel like it, but that will probably be about 10 years from now.

<img width="1390" height="680" alt="image" src="https://github.com/user-attachments/assets/04c5d8e2-2465-4dac-b3ea-37d7e7f987cc" />

When `--tflite_backend flatbuffer_direct` is selected, onnx2tf now uses a direct fast path for both ONNX input and `-it/--input_tf…` input.

GitHub Stars: 940
Category: Development
Updated: 12h ago
Forks: 97
Languages: Python
Security Score: 100/100 (audited on Mar 26, 2026; no findings)