Summary of the Grammar

Statements

Grammar of a statement
statement → expression ; opt
statement → declaration ; opt
statement → loop-statement ; opt
statement → branch-statement ; opt
statement → labeled-statement
statement → control-transfer-statement ; opt
statement → defer-statement ; opt
statement → do-statement ; opt
statements → statement statements opt

Grammar of a loop statement
loop-statement → for-statement
loop-statement → for-in-statement
loop-statement → while-statement
loop-statement → repeat-while-statement

Grammar of a for statement
for-statement → for for-init opt ; expression opt ; expression opt code-block
for-statement → for ( for-init opt ; expression opt ; expression opt ) code-block
for-init → variable-declaration | expression-list

Grammar of a for-in statement
for-in-statement → for case opt pattern in expression where-clause opt code-block

Grammar of a while statement
while-statement → while condition-clause code-block
condition-clause → expression
condition-clause → expression , condition-list
condition-clause → condition-list
condition-clause → availability-condition , expression
condition-list → condition | condition , condition-list
condition → availability-condition | case-condition | optional-binding-condition
case-condition → case pattern initializer where-clause opt
optional-binding-condition → optional-binding-head optional-binding-continuation-list opt where-clause opt
optional-binding-head → let pattern initializer | var pattern initializer
optional-binding-continuation-list → optional-binding-continuation | optional-binding-continuation , optional-binding-continuation-list
optional-binding-continuation → pattern initializer | optional-binding-head

Grammar of a repeat-while statement
repeat-while-statement → repeat code-block while expression
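
A minimal sketch of the loop forms above, assuming Swift 2-era syntax (the C-style for statement shown here was removed in later Swift versions); the names total, numbers, and iterator are illustrative.

    // C-style for statement (parenthesized and unparenthesized forms are allowed)
    var total = 0
    for var i = 0; i < 5; i += 1 {
        total += i
    }
    print(total)    // 10

    // for-in statement with a case pattern and a where clause
    let numbers: [Int?] = [1, nil, 3]
    for case let value? in numbers where value > 1 {
        print(value)    // 3
    }

    // while statement whose condition clause is an optional binding
    var iterator = [10, 20].generate()
    while let element = iterator.next() {
        print(element)
    }

    // repeat-while statement: the code block runs at least once
    var line = 0
    repeat {
        line += 1
    } while line < 3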

Grammar of a branch statement
branch-statement → if-statement
branch-statement → guard-statement
branch-statement → switch-statement

Grammar of an if statement
if-statement → if condition-clause code-block else-clause opt
else-clause → else code-block | else if-statement

Grammar of a guard statement
guard-statement → guard condition-clause else code-block
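
A small sketch of a guard statement whose condition clause is an optional binding; the function and dictionary key are illustrative.

    func greet(person: [String: String]) {
        // guard binds name for the rest of the scope; its else block must exit
        guard let name = person["name"] else {
            return
        }
        print("Hello, \(name)")
    }

    greet(["name": "Ada"])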

Grammar of a switch statement
switch-statement → switch expression { switch-cases opt }
switch-cases → switch-case switch-cases opt
switch-case → case-label statements | default-label statements
switch-case → case-label ; | default-label ;
case-label → case case-item-list :
case-item-list → pattern where-clause opt | pattern where-clause opt , case-item-list
default-label → default :
where-clause → where where-expression
where-expression → expression
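
A sketch of a switch statement that uses a case-item list, a where clause, and a default label; the tuple point is illustrative.

    let point = (1, -1)
    switch point {
    case (0, 0):
        print("origin")
    case (let x, let y) where x == -y:      // case item with a where clause
        print("on the line x == -y")
    case (_, 0), (0, _):                    // case-item-list with two patterns
        print("on an axis")
    default:
        print("somewhere else")
    }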

Grammar of a labeled statement
labeled-statement → statement-label loop-statement | statement-label if-statement | statement-label switch-statement
statement-label → label-name :
label-name → identifier
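
A sketch showing how a statement label lets break and continue name the statement they apply to; the label name outer is illustrative.

    outer: for i in 1...3 {
        for j in 1...3 {
            if j == 2 { continue outer }    // continue the labeled loop
            if i == 3 { break outer }       // break out of the labeled loop
            print(i, j)
        }
    }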

Grammar of a control transfer statement
control-transfer-statement → break-statement
control-transfer-statement → continue-statement
control-transfer-statement → fallthrough-statement
control-transfer-statement → return-statement
control-transfer-statement → throw-statement

Grammar of a break statement
break-statement → break label-name opt

Grammar of a continue statement
continue-statement → continue label-name opt

Grammar of a fallthrough statement
fallthrough-statement → fallthrough

Grammar of a return statement
return-statement → return expression opt

Grammar of an availability condition
availability-condition → #available ( availability-arguments )
availability-arguments → availability-argument | availability-argument , availability-arguments
availability-argument → platform-name platform-version
availability-argument → *
platform-name → iOS | iOSApplicationExtension
platform-name → OSX | OSXApplicationExtension
platform-name → watchOS
platform-version → decimal-digits
platform-version → decimal-digits . decimal-digits
platform-version → decimal-digits . decimal-digits . decimal-digits
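
A sketch of an availability condition used as the condition clause of an if statement; the platform versions are illustrative.

    if #available(iOS 9.0, OSX 10.11, *) {
        // Use APIs introduced in iOS 9 / OS X 10.11
    } else {
        // Fall back to earlier APIs
    }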

Grammar of a throw statement
throw-statement → throw expression

Grammar of a defer statement
defer-statement → defer code-block

Grammar of a do statement
do-statement → do code-block catch-clauses opt
catch-clauses → catch-clause catch-clauses opt
catch-clause → catch pattern opt where-clause opt code-block
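
A sketch tying together throw, defer, and do statements, assuming Swift 2-era error handling (ErrorType); the FileError type and read function are illustrative.

    enum FileError: ErrorType {
        case NotFound(name: String)
    }

    func read(name: String) throws -> String {
        defer { print("read(\(name)) finished") }   // defer-statement runs on every exit path
        guard name == "config" else { throw FileError.NotFound(name: name) }
        return "contents"
    }

    do {
        let text = try read("missing")
        print(text)
    } catch FileError.NotFound(let name) {          // catch-clause with an enum case pattern
        print("no file named \(name)")
    } catch {
        print("unexpected error: \(error)")
    }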

Generic Parameters and Arguments

Grammar of a generic parameter clause
generic-parameter-clause → < generic-parameter-list requirement-clause opt >
generic-parameter-list → generic-parameter | generic-parameter , generic-parameter-list
generic-parameter → type-name
generic-parameter → type-name : type-identifier
generic-parameter → type-name : protocol-composition-type
requirement-clause → where requirement-list
requirement-list → requirement | requirement , requirement-list
requirement → conformance-requirement | same-type-requirement
conformance-requirement → type-identifier : type-identifier
conformance-requirement → type-identifier : protocol-composition-type
same-type-requirement → type-identifier == type
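
A sketch of a generic parameter clause with a conformance constraint and a requirement clause, assuming Swift 2-era protocol names (SequenceType, Generator); the function allEqual is illustrative.

    // Generic parameter with a type constraint, plus a requirement clause
    func allEqual<S: SequenceType where S.Generator.Element: Equatable>(values: S, to target: S.Generator.Element) -> Bool {
        for value in values {
            if value != target { return false }
        }
        return true
    }

    allEqual([1, 1, 1], to: 1)    // true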

Grammar of a generic argument clause
generic-argument-clause → < generic-argument-list >
generic-argument-list → generic-argument | generic-argument , generic-argument-list
generic-argument → type

Declarations

Grammar of a declaration
declaration → import-declaration
declaration → constant-declaration
declaration → variable-declaration
declaration → typealias-declaration
declaration → function-declaration
declaration → enum-declaration
declaration → struct-declaration
declaration → class-declaration
declaration → protocol-declaration
declaration → initializer-declaration
declaration → deinitializer-declaration
declaration → extension-declaration
declaration → subscript-declaration
declaration → operator-declaration
declarations → declaration declarations opt

Grammar of a top-level declaration
top-level-declaration → statements opt

Grammar of a code block
code-block → { statements opt }

Grammar of an import declaration
import-declaration → attributes opt import import-kind opt import-path
import-kind → typealias | struct | class | enum | protocol | var | func
import-path → import-path-identifier | import-path-identifier . import-path
import-path-identifier → identifier | operator

Grammar of a constant declaration
constant-declaration → attributes opt declaration-modifiers opt let pattern-initializer-list
pattern-initializer-list → pattern-initializer | pattern-initializer , pattern-initializer-list
pattern-initializer → pattern initializer opt
initializer → = expression

Grammar of a variable declaration
variable-declaration → variable-declaration-head pattern-initializer-list
variable-declaration → variable-declaration-head variable-name type-annotation code-block
variable-declaration → variable-declaration-head variable-name type-annotation getter-setter-block
variable-declaration → variable-declaration-head variable-name type-annotation getter-setter-keyword-block
variable-declaration → variable-declaration-head variable-name type-annotation initializer opt willSet-didSet-block
variable-declaration-head → attributes opt declaration-modifiers opt var
variable-name → identifier
getter-setter-block → { getter-clause setter-clause opt }
getter-setter-block → { setter-clause getter-clause }
getter-clause → attributes opt get code-block
setter-clause → attributes opt set setter-name opt code-block
setter-name → ( identifier )
getter-setter-keyword-block → { getter-keyword-clause setter-keyword-clause opt }
getter-setter-keyword-block → { setter-keyword-clause getter-keyword-clause }
getter-keyword-clause → attributes opt get
setter-keyword-clause → attributes opt set
willSet-didSet-block → { willSet-clause didSet-clause opt }
willSet-didSet-block → { didSet-clause willSet-clause }
willSet-clause → attributes opt willSet setter-name opt code-block
didSet-clause → attributes opt didSet setter-name opt code-block
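
A sketch of variable declarations with a willSet-didSet block and a getter-setter block; the Thermostat type is illustrative.

    struct Thermostat {
        // Stored variable with a type annotation, an initializer, and a willSet-didSet block
        var target: Double = 20.0 {
            willSet { print("changing to \(newValue)") }
            didSet { print("changed from \(oldValue)") }
        }

        // Computed variable declared with a getter-setter block (the setter names its parameter)
        var targetFahrenheit: Double {
            get { return target * 9 / 5 + 32 }
            set(newTemperature) { target = (newTemperature - 32) * 5 / 9 }
        }
    }

    var thermostat = Thermostat()
    thermostat.targetFahrenheit = 72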

Grammar of a typealias declaration
typealias-declaration → typealias-head typealias-assignment
typealias-head → attributes opt access-level-modifier opt typealias typealias-name
typealias-name → identifier
typealias-assignment → = type

Grammar of a function declaration
function-declaration → function-head function-name generic-parameter-clause opt function-signature function-body
function-head → attributes opt declaration-modifiers opt func
function-name → identifier | operator
function-signature → parameter-clauses throws opt function-result opt
function-signature → parameter-clauses rethrows function-result opt
function-result → -> attributes opt type
function-body → code-block
parameter-clauses → parameter-clause parameter-clauses opt
parameter-clause → ( ) | ( parameter-list ... opt )
parameter-list → parameter | parameter , parameter-list
parameter → inout opt let opt external-parameter-name opt local-parameter-name type-annotation default-argument-clause opt
parameter → inout opt var external-parameter-name opt local-parameter-name type-annotation default-argument-clause opt
parameter → attributes opt type
external-parameter-name → identifier | _
local-parameter-name → identifier | _
default-argument-clause → = expression
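
A sketch of a function declaration with external and local parameter names, a default argument clause, and a function result, assuming Swift 2-era parameter-name rules; the join function is illustrative.

    func join(string s1: String, toString s2: String, withSeparator separator: String = " ") -> String {
        return s1 + separator + s2
    }

    join(string: "hello", toString: "world")                        // "hello world"
    join(string: "hello", toString: "world", withSeparator: "-")    // "hello-world"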

Grammar of an enumeration declaration
enum-declaration → attributes opt access-level-modifier opt union-style-enum
enum-declaration → attributes opt access-level-modifier opt raw-value-style-enum
union-style-enum → enum enum-name generic-parameter-clause opt type-inheritance-clause opt { union-style-enum-members opt }
union-style-enum-members → union-style-enum-member union-style-enum-members opt
union-style-enum-member → declaration | union-style-enum-case-clause
union-style-enum-case-clause → attributes opt case union-style-enum-case-list
union-style-enum-case-list → union-style-enum-case | union-style-enum-case , union-style-enum-case-list
union-style-enum-case → enum-case-name tuple-type opt
enum-name → identifier
enum-case-name → identifier
raw-value-style-enum → enum enum-name generic-parameter-clause opt : type-identifier { raw-value-style-enum-members opt }
raw-value-style-enum-members → raw-value-style-enum-member raw-value-style-enum-members opt
raw-value-style-enum-member → declaration | raw-value-style-enum-case-clause
raw-value-style-enum-case-clause → attributes opt case raw-value-style-enum-case-list
raw-value-style-enum-case-list → raw-value-style-enum-case | raw-value-style-enum-case , raw-value-style-enum-case-list
raw-value-style-enum-case → enum-case-name raw-value-assignment opt
raw-value-assignment → = raw-value-literal
raw-value-literal → numeric-literal | string-literal | boolean-literal
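
A sketch of both enumeration styles described above; the Barcode and Planet types are illustrative.

    // Union-style enumeration: cases carry tuple-typed associated values
    enum Barcode {
        case UPCA(Int, Int, Int, Int)
        case QRCode(String)
    }

    // Raw-value-style enumeration: the type identifier after the colon supplies the raw values
    enum Planet: Int {
        case Mercury = 1, Venus, Earth
    }

    let code = Barcode.QRCode("ABCDEF")
    print(code, Planet.Earth.rawValue)    // QRCode("ABCDEF") 3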

Grammar of a structure declaration
struct-declaration → attributes opt access-level-modifier opt struct struct-name generic-parameter-clause opt type-inheritance-clause opt struct-body
struct-name → identifier
struct-body → { declarations opt }

Grammar of a class declaration
class-declaration → attributes opt access-level-modifier opt class class-name generic-parameter-clause opt type-inheritance-clause opt class-body
class-name → identifier
class-body → { declarations opt }

Grammar of a protocol declaration
protocol-declaration → attributes opt access-level-modifier opt protocol protocol-name type-inheritance-clause opt protocol-body
protocol-name → identifier
protocol-body → { protocol-member-declarations opt }
protocol-member-declaration → protocol-property-declaration
protocol-member-declaration → protocol-method-declaration
protocol-member-declaration → protocol-initializer-declaration
protocol-member-declaration → protocol-subscript-declaration
protocol-member-declaration → protocol-associated-type-declaration
protocol-member-declarations → protocol-member-declaration protocol-member-declarations opt

Grammar of a protocol property declaration
protocol-property-declaration → variable-declaration-head variable-name type-annotation getter-setter-keyword-block

Grammar of a protocol method declaration
protocol-method-declaration → function-head function-name generic-parameter-clause opt function-signature

Grammar of a protocol initializer declaration
protocol-initializer-declaration → initializer-head generic-parameter-clause opt parameter-clause

Grammar of a protocol subscript declaration
protocol-subscript-declaration → subscript-head subscript-result getter-setter-keyword-block

Grammar of a protocol associated type declaration
protocol-associated-type-declaration → typealias-head type-inheritance-clause opt typealias-assignment opt
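
A sketch of a protocol body containing each kind of protocol member declaration, assuming Swift 2-era syntax where associated types are declared with typealias; the Container protocol is illustrative.

    protocol Container {
        typealias ItemType                        // protocol associated type declaration
        var count: Int { get }                    // property requirement (getter keyword clause)
        init()                                    // initializer requirement
        mutating func append(item: ItemType)      // method requirement
        subscript(i: Int) -> ItemType { get }     // subscript requirement
    }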

Grammar of an initializer declaration
initializer-declaration → initializer-head generic-parameter-clause opt parameter-clause initializer-body
initializer-head → attributes opt declaration-modifiers opt init
initializer-head → attributes opt declaration-modifiers opt init ?
initializer-head → attributes opt declaration-modifiers opt init !
initializer-body → code-block

Grammar of a deinitializer declaration
deinitializer-declaration → attributes opt deinit code-block

Grammar of an extension declaration
extension-declaration → access-level-modifier opt extension type-identifier type-inheritance-clause opt extension-body
extension-body → { declarations opt }

Grammar of a subscript declaration
subscript-declaration → subscript-head subscript-result code-block
subscript-declaration → subscript-head subscript-result getter-setter-block
subscript-declaration → subscript-head subscript-result getter-setter-keyword-block
subscript-head → attributes opt declaration-modifiers opt subscript parameter-clause
subscript-result → -> attributes opt type

Grammar of an operator declaration
operator-declaration → prefix-operator-declaration | postfix-operator-declaration | infix-operator-declaration
prefix-operator-declaration → prefix operator operator { }
postfix-operator-declaration → postfix operator operator { }
infix-operator-declaration → infix operator operator { infix-operator-attributes opt }
infix-operator-attributes → precedence-clause opt associativity-clause opt
precedence-clause → precedence precedence-level
precedence-level → A decimal integer between 0 and 255, inclusive
associativity-clause → associativity associativity
associativity → left | right | none
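
A sketch of an infix operator declaration and an implementing function declaration, with the infix-operator-attributes written in the order given by the productions above; the ** operator is illustrative.

    infix operator ** { precedence 160 associativity left }

    func ** (base: Int, exponent: Int) -> Int {
        var result = 1
        for _ in 0..<exponent { result *= base }
        return result
    }

    print(2 ** 10)    // 1024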

Grammar of a declaration modifier
declaration-modifier → class | convenience | dynamic | final | infix | lazy | mutating | nonmutating | optional | override | postfix | prefix | required | static | unowned | unowned(safe) | unowned(unsafe) | weak
declaration-modifier → access-level-modifier
declaration-modifiers → declaration-modifier declaration-modifiers opt
access-level-modifier → internal | internal ( set )
access-level-modifier → private | private ( set )
access-level-modifier → public | public ( set )
access-level-modifiers → access-level-modifier access-level-modifiers opt

Patterns

Grammar of a pattern
pattern → wildcard-pattern type-annotation opt
pattern → identifier-pattern type-annotation opt
pattern → value-binding-pattern
pattern → tuple-pattern type-annotation opt
pattern → enum-case-pattern
pattern → optional-pattern
pattern → type-casting-pattern
pattern → expression-pattern

Grammar of a wildcard pattern
wildcard-pattern → _

Grammar of an identifier pattern
identifier-pattern → identifier

Grammar of a value-binding pattern
value-binding-pattern → var pattern | let pattern

Grammar of a tuple pattern
tuple-pattern → ( tuple-pattern-element-list opt )
tuple-pattern-element-list → tuple-pattern-element | tuple-pattern-element , tuple-pattern-element-list
tuple-pattern-element → pattern

Grammar of an enumeration case pattern
enum-case-pattern → type-identifier opt . enum-case-name tuple-pattern opt

Grammar of an optional pattern
optional-pattern → identifier-pattern ?

Grammar of a type-casting pattern
type-casting-pattern → is-pattern | as-pattern
is-pattern → is type
as-pattern → pattern as type

Grammar of an expression pattern
expression-pattern → expression
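
A sketch of several of the patterns above used in switch statements; the values are illustrative.

    // Value-binding and tuple patterns
    let point = (3, -3)
    switch point {
    case let (x, y):
        print("(\(x), \(y))")
    }

    // Wildcard, is, and as patterns
    let things: [Any] = [1, "two", 3.0]
    for thing in things {
        switch thing {
        case is Int:
            print("an Int")
        case let text as String:
            print("a String: \(text)")
        case _:
            print("something else")
        }
    }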

Attributes

Grammar of an attribute
attribute → @ attribute-name attribute-argument-clause opt
attribute-name → identifier
attribute-argument-clause → ( balanced-tokens opt )
attributes → attribute attributes opt
balanced-tokens → balanced-token balanced-tokens opt
balanced-token → ( balanced-tokens opt )
balanced-token → [ balanced-tokens opt ]
balanced-token → { balanced-tokens opt }
balanced-token → Any identifier, keyword, literal, or operator
balanced-token → Any punctuation except ( , ) , [ , ] , { , or }
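
A sketch of attributes with and without an argument clause, assuming an Apple platform where Foundation and the @objc attribute are available; the declarations are illustrative.

    import Foundation

    // Attribute with an argument clause
    @available(iOS 9.0, OSX 10.11, *)
    func modernAPI() {}

    // Attribute without an argument clause, applied to a class member
    class TapHandler: NSObject {
        @objc func handleTap() {}
    }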

Expressions

Grammar of an expression
expression → try-operator opt prefix-expression binary-expressions opt
expression-list → expression | expression , expression-list

Grammar of a prefix expression
prefix-expression → prefix-operator opt postfix-expression
prefix-expression → in-out-expression
in-out-expression → & identifier

Grammar of a try expression
try-operator → try | try ? | try !

Grammar of a binary expression
binary-expression → binary-operator prefix-expression
binary-expression → assignment-operator try-operator opt prefix-expression
binary-expression → conditional-operator try-operator opt prefix-expression
binary-expression → type-casting-operator
binary-expressions → binary-expression binary-expressions opt

Grammar of an assignment operator
assignment-operator → =

Grammar of a conditional operator
conditional-operator → ? expression :

Grammar of a type-casting operator
type-casting-operator → is type
type-casting-operator → as type
type-casting-operator → as ? type
type-casting-operator → as ! type

Grammar of a primary expression
primary-expression → identifier generic-argument-clause opt
primary-expression → literal-expression
primary-expression → self-expression
primary-expression → superclass-expression
primary-expression → closure-expression
primary-expression → parenthesized-expression
primary-expression → implicit-member-expression
primary-expression → wildcard-expression

Grammar of a literal expression
literal-expression → literal
literal-expression → array-literal | dictionary-literal
literal-expression → __FILE__ | __LINE__ | __COLUMN__ | __FUNCTION__
array-literal → [ array-literal-items opt ]
array-literal-items → array-literal-item , opt | array-literal-item , array-literal-items
array-literal-item → expression
dictionary-literal → [ dictionary-literal-items ] | [ : ]
dictionary-literal-items → dictionary-literal-item , opt | dictionary-literal-item , dictionary-literal-items
dictionary-literal-item → expression : expression

Grammar of a self expression
self-expression → self
self-expression → self . identifier
self-expression → self [ expression ]
self-expression → self . init

Grammar of a superclass expression
superclass-expression → superclass-method-expression | superclass-subscript-expression | superclass-initializer-expression
superclass-method-expression → super . identifier
superclass-subscript-expression → super [ expression ]
superclass-initializer-expression → super . init

Grammar of a closure expression
closure-expression → { closure-signature opt statements }
closure-signature → parameter-clause function-result opt in
closure-signature → identifier-list function-result opt in
closure-signature → capture-list parameter-clause function-result opt in
closure-signature → capture-list identifier-list function-result opt in
closure-signature → capture-list in
capture-list → [ capture-specifier expression ]
capture-specifier → weak | unowned | unowned(safe) | unowned(unsafe)
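
A sketch of closure expressions covering the signature forms above (a parameter clause with a function result, an identifier list, and a capture list); the names are illustrative.

    // Signature with a parameter clause and a function result
    let double = { (value: Int) -> Int in
        return value * 2
    }

    // Signature that is just an identifier list, passed as a trailing closure
    let doubled = [1, 2, 3].map { value in value * 2 }

    // Signature with a capture list and a capture specifier
    class Owner { var label = "owner" }
    let owner = Owner()
    let describe = { [weak owner] () -> String in
        return owner?.label ?? "gone"
    }

    print(double(21), doubled, describe())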

Grammar of an implicit member expression
implicit-member-expression → . identifier

Grammar of a parenthesized expression
parenthesized-expression → ( expression-element-list opt )
expression-element-list → expression-element | expression-element , expression-element-list
expression-element → expression | identifier : expression

Grammar of a wildcard expression
wildcard-expression → _

Grammar of a postfix expression
postfix-expression → primary-expression
postfix-expression → postfix-expression postfix-operator
postfix-expression → function-call-expression
postfix-expression → initializer-expression
postfix-expression → explicit-member-expression
postfix-expression → postfix-self-expression
postfix-expression → dynamic-type-expression
postfix-expression → subscript-expression
postfix-expression → forced-value-expression
postfix-expression → optional-chaining-expression

Grammar of a function call expression
function-call-expression → postfix-expression parenthesized-expression
function-call-expression → postfix-expression parenthesized-expression opt trailing-closure
trailing-closure → closure-expression

Grammar of an initializer expression
initializer-expression → postfix-expression . init

Grammar of an explicit member expression
explicit-member-expression → postfix-expression . decimal-digits
explicit-member-expression → postfix-expression . identifier generic-argument-clause opt

Grammar of a postfix self expression
postfix-self-expression → postfix-expression . self

Grammar of a dynamic type expression
dynamic-type-expression → postfix-expression . dynamicType

Grammar of a subscript expression
subscript-expression → postfix-expression [ expression-list ]

Grammar of a forced-value expression
forced-value-expression → postfix-expression !

Grammar of an optional-chaining expression
optional-chaining-expression → postfix-expression ?
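
A sketch combining several postfix expressions: explicit member, subscript, optional-chaining, forced-value, and dynamic type expressions, assuming Swift 2-era APIs (characters, dynamicType); the Account type is illustrative.

    class Account {
        var owner: String? = "Ada"
        subscript(index: Int) -> Int { return index * 10 }
    }

    let account: Account? = Account()

    // optional-chaining and explicit member expressions
    let ownerLength = account?.owner?.characters.count

    // forced-value and subscript expressions (traps at runtime if account is nil)
    let first = account![1]

    // dynamic type expression
    print(account!.dynamicType, ownerLength, first)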

Lexical Structure

Grammar of an identifier
identifier → identifier-head identifier-characters opt
identifier → ` identifier-head identifier-characters opt `
identifier → implicit-parameter-name
identifier-list → identifier | identifier , identifier-list
identifier-head → Upper- or lowercase letter A through Z
identifier-head → _
identifier-head → U+00A8, U+00AA, U+00AD, U+00AF, U+00B2–U+00B5, or U+00B7–U+00BA
identifier-head → U+00BC–U+00BE, U+00C0–U+00D6, U+00D8–U+00F6, or U+00F8–U+00FF
identifier-head → U+0100–U+02FF, U+0370–U+167F, U+1681–U+180D, or U+180F–U+1DBF
identifier-head → U+1E00–U+1FFF
identifier-head → U+200B–U+200D, U+202A–U+202E, U+203F–U+2040, U+2054, or U+2060–U+206F
identifier-head → U+2070–U+20CF, U+2100–U+218F, U+2460–U+24FF, or U+2776–U+2793
identifier-head → U+2C00–U+2DFF or U+2E80–U+2FFF
identifier-head → U+3004–U+3007, U+3021–U+302F, U+3031–U+303F, or U+3040–U+D7FF
identifier-head → U+F900–U+FD3D, U+FD40–U+FDCF, U+FDF0–U+FE1F, or U+FE30–U+FE44
identifier-head → U+FE47–U+FFFD
identifier-head → U+10000–U+1FFFD, U+20000–U+2FFFD, U+30000–U+3FFFD, or U+40000–U+4FFFD
identifier-head → U+50000–U+5FFFD, U+60000–U+6FFFD, U+70000–U+7FFFD, or U+80000–U+8FFFD
identifier-head → U+90000–U+9FFFD, U+A0000–U+AFFFD, U+B0000–U+BFFFD, or U+C0000–U+CFFFD
identifier-head → U+D0000–U+DFFFD or U+E0000–U+EFFFD
identifier-character → Digit 0 through 9
identifier-character → U+0300–U+036F, U+1DC0–U+1DFF, U+20D0–U+20FF, or U+FE20–U+FE2F
identifier-character → identifier-head
identifier-characters → identifier-character identifier-characters opt
implicit-parameter-name → $ decimal-digits

Grammar of a literal
literal → numeric-literal | string-literal | boolean-literal | nil-literal
numeric-literal → - opt integer-literal | - opt floating-point-literal
boolean-literal → true | false
nil-literal → nil

Grammar of an integer literal
integer-literal → binary-literal
integer-literal → octal-literal
integer-literal → decimal-literal
integer-literal → hexadecimal-literal
binary-literal → 0b binary-digit binary-literal-characters opt
binary-digit → Digit 0 or 1
binary-literal-character → binary-digit | _
binary-literal-characters → binary-literal-character binary-literal-characters opt
octal-literal → 0o octal-digit octal-literal-characters opt
octal-digit → Digit 0 through 7
octal-literal-character → octal-digit | _
octal-literal-characters → octal-literal-character octal-literal-characters opt
decimal-literal → decimal-digit decimal-literal-characters opt
decimal-digit → Digit 0 through 9
decimal-digits → decimal-digit decimal-digits opt
decimal-literal-character → decimal-digit | _
decimal-literal-characters → decimal-literal-character decimal-literal-characters opt
hexadecimal-literal → 0x hexadecimal-digit hexadecimal-literal-characters opt
hexadecimal-digit → Digit 0 through 9, a through f, or A through F
hexadecimal-literal-character → hexadecimal-digit | _
hexadecimal-literal-characters → hexadecimal-literal-character hexadecimal-literal-characters opt

Grammar of a floating-point literal
floating-point-literal → decimal-literal decimal-fraction opt decimal-exponent opt
floating-point-literal → hexadecimal-literal hexadecimal-fraction opt hexadecimal-exponent
decimal-fraction → . decimal-literal
decimal-exponent → floating-point-e sign opt decimal-literal
hexadecimal-fraction → . hexadecimal-digit hexadecimal-literal-characters opt
hexadecimal-exponent → floating-point-p sign opt decimal-literal
floating-point-e → e | E
floating-point-p → p | P
sign → + | -

Grammar of a string literal
string-literal → " quoted-text "
quoted-text → quoted-text-item quoted-text opt
quoted-text-item → escaped-character
quoted-text-item → \ ( expression )
quoted-text-item → Any Unicode scalar value except " , \ , U+000A, or U+000D
escaped-character → \0 | \\ | \t | \n | \r | \" | \'
escaped-character → \u { unicode-scalar-digits }
unicode-scalar-digits → Between one and eight hexadecimal digits

Grammar of operators
operator → operator-head operator-characters opt
operator → dot-operator-head dot-operator-characters opt
operator-head → / | = | - | + | ! | * | % | < | > | & | | | ^ | ~ | ?
operator-head → U+00A1–U+00A7
operator-head → U+00A9 or U+00AB
operator-head → U+00AC or U+00AE
operator-head → U+00B0–U+00B1, U+00B6, U+00BB, U+00BF, U+00D7, or U+00F7
operator-head → U+2016–U+2017 or U+2020–U+2027
operator-head → U+2030–U+203E
operator-head → U+2041–U+2053
operator-head → U+2055–U+205E
operator-head → U+2190–U+23FF
operator-head → U+2500–U+2775
operator-head → U+2794–U+2BFF
operator-head → U+2E00–U+2E7F
operator-head → U+3001–U+3003
operator-head → U+3008–U+3030
operator-character → operator-head
operator-character → U+0300–U+036F
operator-character → U+1DC0–U+1DFF
operator-character → U+20D0–U+20FF
operator-character → U+FE00–U+FE0F
operator-character → U+FE20–U+FE2F
operator-character → U+E0100–U+E01EF
operator-characters → operator-character operator-characters opt
dot-operator-head → ..
dot-operator-character → . | operator-character
dot-operator-characters → dot-operator-character dot-operator-characters opt
binary-operator → operator
prefix-operator → operator
postfix-operator → operator

Types

Grammar of a type
type → array-type | dictionary-type | function-type | type-identifier | tuple-type | optional-type | implicitly-unwrapped-optional-type | protocol-composition-type | metatype-type

Grammar of a type annotation
type-annotation → : attributes opt type

Grammar of a type identifier
type-identifier → type-name generic-argument-clause opt | type-name generic-argument-clause opt . type-identifier
type-name → identifier

Grammar of a tuple type
tuple-type → ( tuple-type-body opt )
tuple-type-body → tuple-type-element-list ... opt
tuple-type-element-list → tuple-type-element | tuple-type-element , tuple-type-element-list
tuple-type-element → attributes opt inout opt type | inout opt element-name type-annotation
element-name → identifier

Grammar of a function type
function-type → type throws opt -> type
function-type → type rethrows -> type

Grammar of an array type
array-type → [ type ]

Grammar of a dictionary type
dictionary-type → [ type : type ]

Grammar of an optional type
optional-type → type ?

Grammar of an implicitly unwrapped optional type
implicitly-unwrapped-optional-type → type !

Grammar of a protocol composition type
protocol-composition-type → protocol < protocol-identifier-list opt >
protocol-identifier-list → protocol-identifier | protocol-identifier , protocol-identifier-list
protocol-identifier → type-identifier

Grammar of a metatype type
metatype-type → type . Type | type . Protocol
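
A sketch of the type forms above, assuming Swift 2-era syntax for protocol composition (protocol<...>); the names are illustrative.

    // Tuple type with element names
    let status: (code: Int, message: String) = (404, "Not Found")

    // Optional function type
    var handler: ((Int) -> String)? = nil
    handler = { code in "code \(code)" }

    // Array and dictionary types
    let codes: [Int] = [200, 404]
    let reasons: [Int: String] = [200: "OK"]

    // Protocol composition type as a parameter type
    func log(value: protocol<CustomStringConvertible, CustomDebugStringConvertible>) {
        print(value.description, value.debugDescription)
    }

    // Metatype type
    let intType: Int.Type = Int.self

    log(status.code)
    print(codes, reasons, intType, handler?(status.code))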

Grammar of a type inheritance clause
type-inheritance-clause → : class-requirement , type-inheritance-list
type-inheritance-clause → : class-requirement
type-inheritance-clause → : type-inheritance-list
type-inheritance-list → type-identifier | type-identifier , type-inheritance-list
class-requirement → class
