Paper Reading Notes: Enhancing Differential Testing With LLMs For Testing Deep Learning Libraries

The DLLens paper
Existing methods select counterparts by choosing different backends (CPU/GPU) of the same API, different implementations of the same function, or APIs with similar signatures (e.g., tf.math.add vs. tf.add). These counterparts typically share underlying implementations, or are internally very similar or even identical, so differential testing over them cannot expose bugs in the API implementation. DLLens instead uses cross-library counterparts, or composes multiple APIs to implement a counterpart.
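The point about shared implementations can be illustrated with a minimal differential-testing sketch. The two log-sum-exp implementations below are toy stand-ins for cross-library counterparts (e.g., a TensorFlow API and its PyTorch counterpart): they share no code, so a genuine implementation flaw in one can surface as a disagreement. All names here are invented for illustration.

```python
import math

# Two independent implementations of the same operator stand in for
# cross-library counterparts; neither shares code with the other.
def logsumexp_naive(xs):
    # Direct translation of the formula; overflows for large inputs.
    return math.log(sum(math.exp(x) for x in xs))

def logsumexp_stable(xs):
    # Numerically stabilized variant: shift by the maximum first.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def differential_test(f, g, inputs, eps=1e-6):
    """Report inputs where the two counterparts disagree beyond eps."""
    discrepancies = []
    for x in inputs:
        try:
            a, b = f(x), g(x)
            if abs(a - b) >= eps or math.isnan(a) != math.isnan(b):
                discrepancies.append((x, a, b))
        except OverflowError:
            discrepancies.append((x, "exception", None))
    return discrepancies

# Small inputs agree; large inputs expose the naive implementation.
print(differential_test(logsumexp_naive, logsumexp_stable,
                        [[0.0, 1.0], [1000.0, 1000.0]]))
```

Because the two counterparts are independently implemented, a discrepancy points at a real behavioral difference rather than a shared bug reproduced twice.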

Input validity: DocTer extracts input constraints from API documentation with a rule-based approach; TensorScope analyzes part of the source code to extract input constraints.
Path constraints: ACETest performs static analysis on only a subset of the API source code and excludes external functions (e.g., std::max), so it may fail to extract some input constraints completely.

LLMs have rich knowledge of DL library APIs and can, in principle, identify API counterparts and extract path constraints. Two problems remain:

  • Counterparts generated by an LLM can be invalid under certain inputs, causing false positives in differential testing;
  • Path constraints generated by an LLM can be incomplete.

DLLens uses GPT-4o-mini.

  • For each API $f$, DLLens first uses a TitanFuzz-style prompt to generate $N$ seed inputs, then applies attribute mutation to enumerate different values for each parameter of $f$ (specifically, for a given parameter, it enumerates all possible attribute values in a predefined mutation space to generate a set of mutants; for non-discrete parameters such as floats, it generates five mutants; strings and other parameter types are not mutated). This yields the validation input set $X$.
  • Using $X$ and the API signature, it synthesizes a counterpart $f'$.
  • It checks $|f(x) - f'(x)| < \epsilon$ on the validation inputs to confirm that $f'$ is a valid counterpart of $f$.
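The three steps above can be sketched as follows. The API `f`, the synthesized counterpart `f_prime`, and the mutation spaces are toy stand-ins invented for illustration; only the enumerate-then-validate structure and the $\epsilon$-based check mirror the pipeline described above.

```python
import itertools
import math

# Toy API and a "synthesized counterpart" built from different primitives.
def f(x, rounding="trunc"):
    return float(int(x)) if rounding == "trunc" else float(round(x))

def f_prime(x, rounding="trunc"):
    return 1.0 * math.trunc(x) if rounding == "trunc" else float(round(x))

# Predefined attribute-mutation spaces: full enumeration for discrete
# attributes; five mutants for continuous (float) parameters.
MUTATION_SPACE = {
    "x": [-1.5, -0.5, 0.0, 0.5, 1.5],   # five float mutants
    "rounding": ["trunc", "round"],      # full enumeration
}

def validation_inputs(space):
    """Cross-product of per-parameter mutants -> validation input set X."""
    keys = list(space)
    for combo in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, combo))

def is_valid_counterpart(f, g, inputs, eps=1e-6):
    """The |f(x) - f'(x)| < eps check over every x in X."""
    return all(abs(f(**x) - g(**x)) < eps for x in inputs)

X = list(validation_inputs(MUTATION_SPACE))
print(is_valid_counterpart(f, f_prime, X))
```

A counterpart that disagrees on any mutant in $X$ is rejected before differential testing, which is how invalid LLM-generated counterparts are filtered out.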
Physics-informed DeepONets represent a novel approach at the intersection of machine learning and computational physics, particularly for solving parametric partial differential equations (PDEs) by learning their solution operators. These networks integrate physical laws and constraints directly into the training process, thereby enhancing the model's ability to generalize and maintain accuracy across different parameter configurations.

DeepONets, or Deep Operator Networks, are designed to approximate nonlinear operators, which are mappings between infinite-dimensional spaces. In the context of PDEs, these operators map input parameters (such as coefficients, boundary conditions, or source terms) to the corresponding solutions. By embedding the physics of the problem into the loss function, physics-informed DeepONets ensure that the learned solutions adhere to the underlying physical principles, even when limited data is available for training.

The architecture of a physics-informed DeepONet typically consists of two main components: a branch net and a trunk net. The branch net processes the input parameters, while the trunk net handles the spatial or temporal coordinates where the solution is to be predicted. The outputs of these two networks are combined through an inner product to produce the final output, which represents the solution of the PDE at specific points in the domain.

Training a physics-informed DeepONet involves minimizing a loss function that includes both data-fidelity terms and terms that enforce satisfaction of the PDE and its boundary/initial conditions. This is achieved by computing the residuals of the PDE at collocation points using automatic differentiation, a process that does not require explicit knowledge of the solution form. Consequently, the network can be trained without large datasets of precomputed solutions, making it a powerful tool for scenarios where data collection is expensive or time-consuming.
One of the key advantages of this method is its ability to handle high-dimensional problems efficiently, as the complexity of the operator approximation scales more favorably with dimensionality compared to traditional discretization-based methods. Moreover, once trained, the DeepONet can rapidly evaluate the solution operator for new parameter values, enabling real-time simulation and uncertainty quantification.

```python
# Example of defining a simple DeepONet structure using PyTorch
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, branch_layers, trunk_layers):
        super(DeepONet, self).__init__()
        self.branch = self._build_network(branch_layers)
        self.trunk = self._build_network(trunk_layers)

    def _build_network(self, layers):
        network = []
        for i in range(len(layers) - 1):
            network.append(nn.Linear(layers[i], layers[i + 1]))
            if i < len(layers) - 2:
                network.append(nn.ReLU())
        return nn.Sequential(*network)

    def forward(self, branch_input, trunk_input):
        b = self.branch(branch_input)
        t = self.trunk(trunk_input)
        return torch.einsum('bi,ti->bt', b, t)

# Instantiation of a DeepONet with specified layer sizes
model = DeepONet(branch_layers=[10, 40, 40, 40], trunk_layers=[2, 40, 40, 40])
```

The application of physics-informed DeepONets extends beyond academic interest; they have practical implications in fields such as fluid dynamics, solid mechanics, and financial modeling, where parametric PDEs are prevalent. By leveraging the strengths of deep learning and incorporating physical knowledge, these networks offer a promising avenue for addressing complex, real-world problems that were previously computationally prohibitive.
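The residual-minimization idea described above can be illustrated without automatic differentiation. This is a minimal dependency-free sketch: it uses finite differences instead of autodiff, and the toy ODE $u'' + u = 0$ with candidate solution $u(x) = \sin(x)$ is an illustrative choice, not taken from the passage.

```python
import math

# Candidate solution for the toy ODE u'' + u = 0 (exact: u = sin).
def u(x):
    return math.sin(x)

def pde_residual(u, x, h=1e-4):
    # Central-difference approximation of u''(x), then the ODE residual.
    u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / (h * h)
    return u_xx + u(x)

# Mean squared residual over collocation points; a physics-informed loss
# drives this quantity toward zero during training. For the exact
# solution it is already small.
collocation = [0.1 * k for k in range(1, 30)]
loss = sum(pde_residual(u, x) ** 2 for x in collocation) / len(collocation)
print(loss)
```

In a real physics-informed DeepONet, `u` would be the network output and the residual would be computed with automatic differentiation, so the same loss structure can be backpropagated through the model parameters.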