- List of ONNX operators supported by TensorRT 7.0
https://github.com/onnx/onnx-tensorrt/blob/84b5be1d6fc03564f2c0dba85a2ee75bad242c2e/operators.md
Operator | Supported? | Restrictions |
---|---|---|
Abs | Y | |
Acos | Y | |
Acosh | Y | |
Add | Y | |
And | Y | |
ArgMax | Y | |
ArgMin | Y | |
Asin | Y | |
Asinh | Y | |
Atan | Y | |
Atanh | Y | |
AveragePool | Y | 2D or 3D Pooling only |
BatchNormalization | Y | |
BitShift | N | |
Cast | Y | Cast is only supported for TRT types |
Ceil | Y | |
Clip | Y | min and max clip values must be an initializer |
Compress | N | |
Concat | Y | |
ConcatFromSequence | N | |
Constant | Y | |
ConstantOfShape | Y | |
Conv | Y | 2D or 3D convolutions only |
ConvInteger | N | |
ConvTranspose | Y | 2D or 3D deconvolutions only. Weights must be an initializer |
Cos | Y | |
Cosh | Y | |
CumSum | N | |
DepthToSpace | Y | |
DequantizeLinear | Y | Scales and zero-point value must be initializers |
Det | N | |
Div | Y | |
Dropout | N | |
Elu | Y | |
Equal | Y | |
Erf | Y | |
Exp | Y | |
Expand | Y | |
EyeLike | N | |
Flatten | Y | |
Floor | Y | |
Gather | Y | |
GatherElements | N | |
GatherND | N | |
Gemm | Y | |
GlobalAveragePool | Y | |
GlobalLpPool | N | |
GlobalMaxPool | Y | |
Greater | Y | |
GRU | Y | |
HardSigmoid | Y | |
Hardmax | N | |
Identity | Y | |
If | N | |
ImageScaler | Y | |
InstanceNormalization | Y | Scales and biases must be an initializer |
IsInf | N | |
IsNaN | N | |
LeakyRelu | Y | |
Less | Y | |
Log | Y | |
LogSoftmax | Y | |
Loop | Y | |
LRN | Y | |
LSTM | Y | |
LpNormalization | N | |
LpPool | N | |
MatMul | Y | |
MatMulInteger | N | |
Max | Y | |
MaxPool | Y | |
MaxRoiPool | N | |
MaxUnpool | N | |
Mean | Y | |
Min | Y | |
Mod | N | |
Mul | Y | |
Multinomial | N | |
Neg | Y | |
NonMaxSuppression | N | |
NonZero | N | |
Not | Y | |
OneHot | N | |
Or | Y | |
Pad | Y | Zero-padding on last 2 dimensions only |
ParametricSoftplus | Y | |
Pow | Y | |
PRelu | Y | |
QLinearConv | N | |
QLinearMatMul | N | |
QuantizeLinear | Y | Scales and zero-point value must be initializers |
RNN | N | |
RandomNormal | N | |
RandomNormalLike | N | |
RandomUniform | Y | |
RandomUniformLike | Y | |
Range | Y | Float inputs are only supported if start, limit and delta inputs are initializers |
Reciprocal | N | |
ReduceL1 | Y | |
ReduceL2 | Y | |
ReduceLogSum | Y | |
ReduceLogSumExp | Y | |
ReduceMax | Y | |
ReduceMean | Y | |
ReduceMin | Y | |
ReduceProd | Y | |
ReduceSum | Y | |
ReduceSumSquare | Y | |
Relu | Y | |
Reshape | Y | |
Resize | Y | Asymmetric coordinate transformation mode only. Nearest or Linear resizing mode only. "floor" mode only for resize_mode attribute. |
ReverseSequence | N | |
RoiAlign | N | |
Round | N | |
ScaledTanh | Y | |
Scan | Y | |
Scatter | N | |
ScatterElements | N | |
ScatterND | N | |
Selu | Y | |
SequenceAt | N | |
SequenceConstruct | N | |
SequenceEmpty | N | |
SequenceErase | N | |
SequenceInsert | N | |
SequenceLength | N | |
Shape | Y | |
Shrink | N | |
Sigmoid | Y | |
Sign | N | |
Sin | Y | |
Sinh | Y | |
Size | Y | |
Slice | Y | Slice axes must be an initializer |
Softmax | Y | |
Softplus | Y | |
Softsign | Y | |
SpaceToDepth | Y | |
Split | Y | |
SplitToSequence | N | |
Sqrt | Y | |
Squeeze | Y | |
StringNormalizer | N | |
Sub | Y | |
Sum | Y | |
Tan | Y | |
Tanh | Y | |
TfIdfVectorizer | N | |
ThresholdedRelu | Y | |
Tile | Y | |
TopK | Y | |
Transpose | Y | |
Unique | N | |
Unsqueeze | Y | |
Upsample | Y | |
Where | Y | |
Xor | N |
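A table like the one above is easiest to apply programmatically: collect the op types used by a model and diff them against the supported set. The sketch below hardcodes a small hand-copied subset of the "Y" entries from the table; `find_unsupported` is a hypothetical helper, not part of any TensorRT API. With the `onnx` package installed, a real model's op types are available as `{node.op_type for node in model.graph.node}`.

```python
# Hand-copied subset of the TensorRT 7.0 support table above ("Y" entries only).
TRT7_SUPPORTED = {
    "Abs", "Add", "AveragePool", "BatchNormalization", "Concat",
    "Conv", "ConvTranspose", "Gemm", "GlobalAveragePool", "MaxPool",
    "Mul", "Relu", "Reshape", "Resize", "Sigmoid", "Softmax",
}

def find_unsupported(model_ops, supported=TRT7_SUPPORTED):
    """Return the op types in model_ops that are missing from the support table."""
    return sorted(set(model_ops) - supported)

# Example: a ResNet-style op list plus NonMaxSuppression, which is "N" in the table.
ops = ["Conv", "BatchNormalization", "Relu", "MaxPool",
       "GlobalAveragePool", "Gemm", "NonMaxSuppression"]
print(find_unsupported(ops))  # ['NonMaxSuppression']
```

Any op reported here would make the ONNX parser fail at build time, so this check is cheaper than a full engine build for triaging a model.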
- List of ONNX operators supported by TensorRT 8.2
https://github.com/onnx/onnx-tensorrt/blob/main/operators.md
Operator | Supported | Supported Types | Restrictions |
---|---|---|---|
Abs | Y | FP32, FP16, INT32 | |
Acos | Y | FP32, FP16 | |
Acosh | Y | FP32, FP16 | |
Add | Y | FP32, FP16, INT32 | |
And | Y | BOOL | |
ArgMax | Y | FP32, FP16 | |
ArgMin | Y | FP32, FP16 | |
Asin | Y | FP32, FP16 | |
Asinh | Y | FP32, FP16 | |
Atan | Y | FP32, FP16 | |
Atanh | Y | FP32, FP16 | |
AveragePool | Y | FP32, FP16, INT8, INT32 | 2D or 3D Pooling only |
BatchNormalization | Y | FP32, FP16 | |
BitShift | N | ||
Cast | Y | FP32, FP16, INT32, INT8, BOOL | |
Ceil | Y | FP32, FP16 | |
Celu | Y | FP32, FP16 | |
Clip | Y | FP32, FP16, INT8 | |
Compress | N | ||
Concat | Y | FP32, FP16, INT32, INT8, BOOL | |
ConcatFromSequence | N | ||
Constant | Y | FP32, FP16, INT32, INT8, BOOL | |
ConstantOfShape | Y | FP32 | |
Conv | Y | FP32, FP16, INT8 | 2D or 3D convolutions only. Weights W must be an initializer |
ConvInteger | N | ||
ConvTranspose | Y | FP32, FP16, INT8 | 2D or 3D deconvolutions only. Weights W must be an initializer |
Cos | Y | FP32, FP16 | |
Cosh | Y | FP32, FP16 | |
CumSum | Y | FP32, FP16 | axis must be an initializer |
DepthToSpace | Y | FP32, FP16, INT32 | |
DequantizeLinear | Y | INT8 | x_zero_point must be zero |
Det | N | ||
Div | Y | FP32, FP16, INT32 | |
Dropout | Y | FP32, FP16 | |
DynamicQuantizeLinear | N | ||
Einsum | Y | FP32, FP16 | Ellipsis and diagonal operations are not supported. Broadcasting between inputs is not supported |
Elu | Y | FP32, FP16, INT8 | |
Equal | Y | FP32, FP16, INT32 | |
Erf | Y | FP32, FP16 | |
Exp | Y | FP32, FP16 | |
Expand | Y | FP32, FP16, INT32, BOOL | |
EyeLike | Y | FP32, FP16, INT32, BOOL | |
Flatten | Y | FP32, FP16, INT32, BOOL | |
Floor | Y | FP32, FP16 | |
Gather | Y | FP32, FP16, INT8, INT32 | |
GatherElements | Y | FP32, FP16, INT8, INT32 | |
GatherND | Y | FP32, FP16, INT8, INT32 | |
Gemm | Y | FP32, FP16, INT8 | |
GlobalAveragePool | Y | FP32, FP16, INT8 | |
GlobalLpPool | Y | FP32, FP16, INT8 | |
GlobalMaxPool | Y | FP32, FP16, INT8 | |
Greater | Y | FP32, FP16, INT32 | |
GreaterOrEqual | Y | FP32, FP16, INT32 | |
GRU | Y | FP32, FP16 | For bidirectional GRUs, activation functions must be the same for both the forward and reverse pass |
HardSigmoid | Y | FP32, FP16, INT8 | |
Hardmax | N | ||
Identity | Y | FP32, FP16, INT32, INT8, BOOL | |
If | Y | FP32, FP16, INT32, BOOL | Output tensors of the two conditional branches must have broadcastable shapes, and must have different names |
ImageScaler | Y | FP32, FP16 | |
InstanceNormalization | Y | FP32, FP16 | Scales scale and biases B must be initializers. Input rank must be >=3 & <=5 |
IsInf | N | ||
IsNaN | Y | FP32, FP16, INT32 | |
LeakyRelu | Y | FP32, FP16, INT8 | |
Less | Y | FP32, FP16, INT32 | |
LessOrEqual | Y | FP32, FP16, INT32 | |
Log | Y | FP32, FP16 | |
LogSoftmax | Y | FP32, FP16 | |
Loop | Y | FP32, FP16, INT32, BOOL | |
LRN | Y | FP32, FP16 | |
LSTM | Y | FP32, FP16 | For bidirectional LSTMs, activation functions must be the same for both the forward and reverse pass |
LpNormalization | Y | FP32, FP16 | |
LpPool | Y | FP32, FP16, INT8 | |
MatMul | Y | FP32, FP16 | |
MatMulInteger | N | ||
Max | Y | FP32, FP16, INT32 | |
MaxPool | Y | FP32, FP16, INT8 | 2D or 3D pooling only. Indices output tensor unsupported |
MaxRoiPool | N | ||
MaxUnpool | N | ||
Mean | Y | FP32, FP16, INT32 | |
MeanVarianceNormalization | N | ||
Min | Y | FP32, FP16, INT32 | |
Mod | N | ||
Mul | Y | FP32, FP16, INT32 | |
Multinomial | N | ||
Neg | Y | FP32, FP16, INT32 | |
NegativeLogLikelihoodLoss | N | ||
NonMaxSuppression | Y [EXPERIMENTAL] | FP32, FP16 | Inputs max_output_boxes_per_class, iou_threshold, and score_threshold must be initializers. Output has fixed shape and is padded to [max_output_boxes_per_class, 3]. |
NonZero | N | ||
Not | Y | BOOL | |
OneHot | N | ||
Or | Y | BOOL | |
Pad | Y | FP32, FP16, INT8, INT32 | |
ParametricSoftplus | Y | FP32, FP16, INT8 | |
Pow | Y | FP32, FP16 | |
PRelu | Y | FP32, FP16, INT8 | |
QLinearConv | N | ||
QLinearMatMul | N | ||
QuantizeLinear | Y | FP32, FP16 | y_zero_point must be 0 |
RandomNormal | N | ||
RandomNormalLike | N | ||
RandomUniform | Y | FP32, FP16 | seed value is ignored by TensorRT |
RandomUniformLike | Y | FP32, FP16 | seed value is ignored by TensorRT |
Range | Y | FP32, FP16, INT32 | Floating point inputs are only supported if start, limit, and delta inputs are initializers |
Reciprocal | N |