A running summary of functions I came across while reading code.
- torch
torch.gt(): element-wise greater-than comparison; returns a bool tensor
FlopCountAnalysis(): counts a model's FLOPs (from fvcore.nn, used for evaluating model cost)
torch.numel(): number of elements
torch.cuda.synchronize(gpu): wait for all pending GPU kernels to finish (used when timing GPU code)
torch.nn.LazyLinear(): in_features is inferred on the first call to forward(); only out_features needs to be specified
torch.nn.init: a collection of parameter-initialization routines (e.g. xavier_uniform_, kaiming_normal_)
torch.amp: automatic mixed precision (use GradScaler to keep fp16 gradients from underflowing)
torch.flatten(): flatten a tensor, may start from a given dimension
torch.nn.functional.silu(): the SiLU/Swish activation, x * sigmoid(x)
torch.multiply(): alias for torch.mul(), element-wise multiplication with broadcasting (the sketches below exercise most of these torch calls)
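A minimal sketch tying most of the torch calls above together; the shapes and values are made up for illustration, and the timing part only runs if a CUDA device is available:

```python
import time
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 3, 8, 8)

mask = torch.gt(x, 0)                           # element-wise x > 0, bool tensor
n = x.numel()                                   # total number of elements: 4*3*8*8 = 768
flat = torch.flatten(x, start_dim=1)            # shape (4, 192), keeps the batch dimension
scaled = torch.multiply(x, torch.tensor(0.5))   # same as torch.mul, broadcasts
act = F.silu(x)                                 # SiLU/Swish: x * sigmoid(x)

# LazyLinear: only out_features is given; in_features is inferred
# from the first input it sees in forward()
lazy = nn.LazyLinear(out_features=16)
out = lazy(flat)                                # weight materializes as (16, 192)

# torch.nn.init: explicit re-initialization of the materialized parameters
nn.init.xavier_uniform_(lazy.weight)
nn.init.zeros_(lazy.bias)

# torch.cuda.synchronize: flush pending kernels so the measured time is meaningful
if torch.cuda.is_available():
    dev = torch.device("cuda:0")
    y = x.to(dev)
    torch.cuda.synchronize(dev)
    t0 = time.time()
    _ = y.matmul(y.transpose(-1, -2))
    torch.cuda.synchronize(dev)
    print(f"matmul took {time.time() - t0:.6f}s")
```

And a minimal mixed-precision training step using the CUDA variant of torch.amp; it assumes a CUDA device, and the model, optimizer and data are throwaway placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(32, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()        # scales the loss so fp16 gradients do not underflow

inputs = torch.randn(8, 32, device="cuda")
targets = torch.randint(0, 10, (8,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():             # forward pass in mixed precision
    loss = F.cross_entropy(model(inputs), targets)

scaler.scale(loss).backward()               # backward on the scaled loss
scaler.step(optimizer)                      # unscales gradients, then calls optimizer.step()
scaler.update()                             # adjusts the scale factor for the next step
```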
- numpy
np.percentile(): find a percentile of an array, used e.g. for eliminating outliers (see the sketch below)
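For illustration, the outlier-trimming pattern the note refers to might look like this; the 1st/99th percentile cutoffs are an arbitrary choice:

```python
import numpy as np

values = np.random.randn(1000)

# np.percentile: the value below which the given percentage of the data falls
p1, p99 = np.percentile(values, [1, 99])

# keep only the central 98% of the samples, discarding the tails as outliers
trimmed = values[(values >= p1) & (values <= p99)]
```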
- python string
str.startswith(): prefix matching
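A common place this shows up when reading model code is filtering state_dict keys by prefix; the key names here are made up:

```python
state_dict = {
    "backbone.conv1.weight": None,
    "backbone.conv1.bias": None,
    "head.fc.weight": None,
}

backbone_keys = [k for k in state_dict if k.startswith("backbone.")]

# startswith also accepts a tuple of prefixes
matched = [k for k in state_dict if k.startswith(("backbone.", "neck."))]
```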
- multiprocessing
mp.Manager(): starts a manager process that holds shared objects (lists, dicts, ...) usable across processes
mp.Process(target, args): create a process that runs the callable target with the argument tuple args
process.start(): start a process
process.join(): block until the process terminates
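Putting the four multiprocessing calls together, a minimal sketch; the worker function and the process count are arbitrary:

```python
import multiprocessing as mp

def worker(rank, results):
    # append into the Manager-backed list shared across processes
    results.append(f"hello from process {rank}")

if __name__ == "__main__":
    manager = mp.Manager()      # server process holding shared objects
    results = manager.list()    # a list proxy usable from any process

    procs = [mp.Process(target=worker, args=(i, results)) for i in range(4)]
    for p in procs:
        p.start()               # launch the child process
    for p in procs:
        p.join()                # block until the child process finishes
    print(list(results))
```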
- sklearn
metrics.roc_auc_score: compute the Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
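A toy example; the labels and scores are illustrative, not from any real model:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]                    # binary ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8]          # predicted scores for the positive class
print(roc_auc_score(y_true, y_score))    # 0.75
```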
- tensorboard
SummaryWriter.add_scalar(): log a scalar value (e.g. a loss) to TensorBoard under a tag at a given global step
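A minimal logging sketch; the log_dir, tag name and loss values are arbitrary:

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/demo")
for step in range(100):
    loss = 1.0 / (step + 1)              # placeholder value
    writer.add_scalar("train/loss", loss, global_step=step)
writer.close()
```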
