26. Implement a two-dimensional function in Python using PyTorch. Visualize the function on the domain [0, 2]². Randomly draw 100 data points from the domain [0, 2]² and use them as training data. Implement a Gaussian process (GP) model with a constant mean function and a Matérn 5/2 kernel, where the outputscale is implemented with a gpytorch.kernels.ScaleKernel object. When initializing the kernel object, either omit the ard_num_dims argument or set it to None. Train the GP model's hyperparameters with gradient descent and inspect the lengthscale after training. Then redefine the GP model class, this time setting ard_num_dims to 2.
Below are the implementation steps with code examples:
- Implement the two-dimensional function:

```python
import torch

def f(x):
    return (
        torch.sin(5 * x[..., 0] / 2 - 2.5) * torch.cos(2.5 - 5 * x[..., 1])
        + (5 * x[..., 1] / 2 + 0.5) ** 2 / 10
    ) / 5 + 0.2
```
- Visualize the function:

```python
import matplotlib.pyplot as plt

lb = 0
ub = 2
xs = torch.linspace(lb, ub, 101)
# Explicit indexing avoids PyTorch's meshgrid deprecation warning
x1, x2 = torch.meshgrid(xs, xs, indexing="ij")
xs = torch.vstack((x1.flatten(), x2.flatten())).transpose(-1, -2)
ys = f(xs)

plt.imshow(ys.reshape(101, 101).T, origin="lower", extent=[lb, ub, lb, ub])
plt.show()
```
- Randomly draw 100 data points as training data:

```python
torch.manual_seed(0)
train_x = torch.rand(size=(100, 2)) * 2  # uniform on [0, 2]^2
train_y = f(train_x)
```
- Implement the GP model:

```python
import gpytorch

class GPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # ScaleKernel supplies the outputscale; ard_num_dims=None means a
        # single lengthscale shared by both input dimensions
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.MaternKernel(nu=2.5, ard_num_dims=None)
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPModel(train_x, train_y, likelihood)
```
- Train the hyperparameters and inspect the lengthscale:

```python
import torch.optim as optim

model.train()
likelihood.train()
optimizer = optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

for i in range(500):
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)  # minimize the negative marginal log likelihood
    loss.backward()
    optimizer.step()

model.eval()
likelihood.eval()
# Without ARD this is a single shared lengthscale of shape (1, 1)
print(model.covar_module.base_kernel.lengthscale)
```
- Redefine the GP model class, this time setting ard_num_dims = 2:

```python
class ARDGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # ard_num_dims=2 learns a separate lengthscale for each input dimension
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.MaternKernel(nu=2.5, ard_num_dims=2)
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```
