Supported Operators
● Caffe
Accuracy, BatchNorm, Clip, Concat, Convolution, Data, Deconvolution, DepthwiseConvolution, DetectionOutput, Dropout, Eltwise, Flatten, InnerProduct, Input, LRN, Normalize, PReLU, Permute, Pooling, Power, PriorBox, ROIPooling, RPN, ReLU, ReLU6, Region, Reorg, Reshape, Resize, Scale, Sigmoid, Slice, Softmax, SoftmaxWithLoss, Split, TanH, Tile, Upsample
● mxnet
Activation, BatchNorm, Concat, Convolution, Copy, Crop, Deconvolution, Dropout, FullyConnected, LeakyReLU, Pooling, RNN, Reshape, SoftmaxActivation, SoftmaxOutput, SwapAxis, UpSampling, _minus_scalar, _mul_scalar, add_n, clip, elemwise_add, transpose
● onnx
Add, AveragePool, BatchNormalization, Concat, Conv, Dropout, Flatten, Gemm, GlobalAveragePool, MaxPool, Relu, Softmax
● Tensorflow
Add, AddN, ArgMax, ArgMin, AudioSpectrogram, AvgPool, ComposedBN, ConcatV2, Conv2D, Conv2DBackpropInput, DecodeWav, DepthwiseConv2dNative, Dropout, Exp, FIFOQueueV2, Flatten, Floor, FusedBatchNorm, GRU, LRN, LSTM, Log, MatMul, MaxPool, Maximum, Mean, Mfcc, Minimum, MirrorPad, Mul, Pad, Pow, RNN, RealDiv, Relu, Relu6, Reshape, ResizeNearestNeighbor, ReverseV2, Rsqrt, Sigmoid, Softmax, Split, Sqrt, StridedSlice, Sub, Sum, Tanh, TopKV2
● tf_lite
ADD, AVERAGE_POOL_2D, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, LOGISTIC, MAX_POOL_2D, RESHAPE, SOFTMAX, SQUEEZE, TFLite_Detection_PostProcess

This article compares in detail the operator sets supported by the Caffe, mxnet, onnx, Tensorflow, and tf_lite frameworks, covering everything from basic mathematical operations to complex neural network layers, and serves as a reference for developers choosing a suitable framework.
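
As a minimal sketch of how such an operator list might be used in practice (not part of the original article; it assumes the standard onnx Python package is installed, and the file name model.onnx is only a placeholder), the following checks whether an ONNX model contains only operators from the onnx list above before attempting a conversion:

```python
import onnx

# The onnx operators listed above.
SUPPORTED_ONNX_OPS = {
    "Add", "AveragePool", "BatchNormalization", "Concat", "Conv", "Dropout",
    "Flatten", "Gemm", "GlobalAveragePool", "MaxPool", "Relu", "Softmax",
}

def unsupported_ops(model_path: str) -> set:
    """Return the operator types used by the model that are not in the supported set."""
    model = onnx.load(model_path)
    used_ops = {node.op_type for node in model.graph.node}
    return used_ops - SUPPORTED_ONNX_OPS

if __name__ == "__main__":
    # "model.onnx" is a placeholder path for illustration only.
    missing = unsupported_ops("model.onnx")
    if missing:
        print("Unsupported operators:", sorted(missing))
    else:
        print("All operators in the model are supported.")
```

A similar check could be written for the other frameworks by swapping in the corresponding operator list and the framework's own graph-inspection API.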