1. From GAN to CGAN
GAN is trained on unlabeled data; if we want to train with labels, we need CGAN.
For images, we want the output to be realistic and also to match the label c. The Discriminator's input is therefore changed to take both c and x, and its output has two jobs: judge whether x is a real image, and judge whether x and c match.
In the two cases below, the left output is sharp but does not match c, while the right output is not realistic; in both cases D should output 0.
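As a minimal sketch (not from the article) of what this means for training: the discriminator sees three kinds of pairs, and only a real image with its matching label gets target 1; a mismatched pair and a generated image both get target 0. The scalar scores below are made-up placeholders for D's outputs:

```python
import torch

bce = torch.nn.BCELoss()

d_real_matching = torch.tensor([0.9])   # D(x_real, c_matching) -> target 1
d_real_mismatch = torch.tensor([0.2])   # D(x_real, c_wrong)    -> target 0
d_fake          = torch.tensor([0.1])   # D(G(z, c), c)         -> target 0

loss = (bce(d_real_matching, torch.ones(1))
        + bce(d_real_mismatch, torch.zeros(1))
        + bce(d_fake, torch.zeros(1)))
print(loss.item())
```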


Let's look at some simple example code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
import torchvision
from torchvision import transforms
from torch.utils import data
import os
import glob
from PIL import Image
# One-hot encoding
# x is the integer class label returned by torchvision; class_count is 10 for MNIST
def one_hot(x, class_count=10):
    return torch.eye(class_count)[x, :]  # row x of the identity matrix is the one-hot vector for class x
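For example, indexing the identity matrix this way turns a batch of integer labels into a batch of one-hot rows in a single operation (the function is repeated here so the snippet runs on its own):

```python
import torch

def one_hot(x, class_count=10):
    # row x of a class_count x class_count identity matrix is the one-hot vector for x
    return torch.eye(class_count)[x, :]

labels = torch.tensor([3, 0, 7])
encoded = one_hot(labels)
print(encoded.shape)  # torch.Size([3, 10])
print(encoded[0])     # 1.0 at index 3, zeros elsewhere
```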
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize(0.5, 0.5)])
dataset = torchvision.datasets.MNIST('data',
                                     train=True,
                                     transform=transform,
                                     target_transform=one_hot,
                                     download=True)  # download the data if it is not present locally
dataloader = data.DataLoader(dataset, batch_size=64, shuffle=True)
# Define the generator
# Inputs: a length-10 condition (one-hot label) and a length-100 noise vector
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.linear1 = nn.Linear(10, 128 * 7 * 7)    # condition branch
        self.bn1 = nn.BatchNorm1d(128 * 7 * 7)
        self.linear2 = nn.Linear(100, 128 * 7 * 7)   # noise branch
        self.bn2 = nn.BatchNorm1d(128 * 7 * 7)
        self.deconv1 = nn.ConvTranspose2d(256, 128,
                                          kernel_size=(3, 3),
                                          padding=1)   # 7x7 -> 7x7
        self.bn3 = nn.BatchNorm2d(128)
        self.deconv2 = nn.ConvTranspose2d(128, 64,
                                          kernel_size=(4, 4),
                                          stride=2,
                                          padding=1)   # 7x7 -> 14x14
        self.bn4 = nn.BatchNorm2d(64)
        self.deconv3 = nn.ConvTranspose2d(64, 1,
                                          kernel_size=(4, 4),
                                          stride=2,
                                          padding=1)   # 14x14 -> 28x28

    def forward(self, x1, x2):
        # x1: condition, x2: noise
        x1 = F.relu(self.linear1(x1))
        x1 = self.bn1(x1)
        x1 = x1.view(-1, 128, 7, 7)
        x2 = F.relu(self.linear2(x2))
        x2 = self.bn2(x2)
        x2 = x2.view(-1, 128, 7, 7)
        x = torch.cat([x1, x2], dim=1)   # concatenate along channels: 128 + 128 = 256
        x = F.relu(self.deconv1(x))
        x = self.bn3(x)
        x = F.relu(self.deconv2(x))
        x = self.bn4(x)
        x = torch.tanh(self.deconv3(x))  # outputs in [-1, 1], matching the Normalize(0.5, 0.5) data
        return x
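As a sanity check on the generator's upsampling arithmetic: ConvTranspose2d produces out = (in − 1) · stride − 2 · padding + kernel, so kernel 4, stride 2, padding 1 exactly doubles the spatial size (7 → 14 → 28), while the first deconv (kernel 3, stride 1, padding 1) keeps it at 7:

```python
import torch
import torch.nn as nn

# ConvTranspose2d output size: out = (in - 1) * stride - 2 * padding + kernel
x = torch.randn(2, 256, 7, 7)  # the concatenated condition + noise feature maps
d1 = nn.ConvTranspose2d(256, 128, kernel_size=3, padding=1)           # 7 -> 7
d2 = nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1)  # 7 -> 14
d3 = nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1)    # 14 -> 28
y = d3(d2(d1(x)))
print(y.shape)  # torch.Size([2, 1, 28, 28])
```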
# Define the discriminator
# Input: a 1x28x28 image and a length-10 condition
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.linear = nn.Linear(10, 1 * 28 * 28)  # map the condition to an image-sized plane
        self.conv1 = nn.Conv2d(2, 64, kernel_size=3, stride=2)    # 28x28 -> 13x13
        self.conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=2)  # 13x13 -> 6x6
        self.bn = nn.BatchNorm2d(128)
        self.fc = nn.Linear(128 * 6 * 6, 1)  # outputs a single probability

    def forward(self, x1, x2):
        # x1: condition, x2: image
        x1 = F.leaky_relu(self.linear(x1))
        x1 = x1.view(-1, 1, 28, 28)
        x = torch.cat([x1, x2], dim=1)  # stack condition plane and image as 2 channels
        x = F.dropout2d(F.leaky_relu(self.conv1(x)))
        x = F.dropout2d(F.leaky_relu(self.conv2(x)))
        x = self.bn(x)
        x = x.view(-1, 128 * 6 * 6)
        x = torch.sigmoid(self.fc(x))
        return x
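The 128 * 6 * 6 input size of the final Linear layer follows from the convolution arithmetic: with kernel 3, stride 2 and no padding, out = floor((in − 3) / 2) + 1, so 28 → 13 → 6:

```python
import torch
import torch.nn as nn

# Conv2d output size (no padding): out = floor((in - kernel) / stride) + 1
x = torch.randn(2, 2, 28, 28)  # 2 channels: condition plane + image
c1 = nn.Conv2d(2, 64, kernel_size=3, stride=2)    # 28 -> 13
c2 = nn.Conv2d(64, 128, kernel_size=3, stride=2)  # 13 -> 6
y = c2(c1(x))
print(y.shape)  # torch.Size([2, 128, 6, 6])
```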
# Initialize the models
device = 'cuda' if torch.cuda.is_available() else 'cpu'
gen = Generator().to(device)
dis = Discriminator().to(device)
# Loss function
loss_function = torch.nn.BCELoss()
# Define the optimizers
d_optim = torch.optim.Adam(dis.parameters(), lr=1e-5)
g_optim = torch.optim.Adam(gen.parameters(), lr=1e-4)
# Visualization helper: plot a 4x4 grid of generated digits
def generate_and_save_images(model, epoch, label_input, noise_input):
    predictions = np.squeeze(model(label_input, noise_input).detach().cpu().numpy())
    fig = plt.figure(figsize=(4, 4))
    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i + 1)
        plt.imshow((predictions[i] + 1) / 2, cmap='gray')  # rescale from [-1, 1] to [0, 1]
        plt.axis('off')
    plt.show()
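The article's training loop is cut off at this point. As a rough sketch of what one CGAN training step typically looks like: the real loop would iterate over dataloader and use the gen, dis, loss_function, d_optim and g_optim defined above; tiny stand-in linear models and random data are used here purely so the snippet runs on its own, and are not the article's architecture:

```python
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(110, 28 * 28), nn.Tanh())        # stand-in G(z, c)
dis = nn.Sequential(nn.Linear(28 * 28 + 10, 1), nn.Sigmoid())  # stand-in D(x, c)
loss_fn = nn.BCELoss()
d_opt = torch.optim.Adam(dis.parameters(), lr=1e-5)
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

batch = 8
real = torch.rand(batch, 28 * 28) * 2 - 1             # stand-in image batch in [-1, 1]
cond = torch.eye(10)[torch.randint(0, 10, (batch,))]  # one-hot conditions
noise = torch.randn(batch, 100)

# --- discriminator step: real image + matching label -> 1, generated -> 0 ---
d_opt.zero_grad()
d_real = dis(torch.cat([real, cond], dim=1))
fake = gen(torch.cat([noise, cond], dim=1))
d_fake = dis(torch.cat([fake.detach(), cond], dim=1))  # detach: don't update G here
d_loss = (loss_fn(d_real, torch.ones(batch, 1))
          + loss_fn(d_fake, torch.zeros(batch, 1)))
d_loss.backward()
d_opt.step()

# --- generator step: fool D into outputting 1 for generated images ---
g_opt.zero_grad()
g_loss = loss_fn(dis(torch.cat([fake, cond], dim=1)), torch.ones(batch, 1))
g_loss.backward()
g_opt.step()
print(d_loss.item(), g_loss.item())
```

Note the `detach()` in the discriminator step: without it, D's loss would also backpropagate into the generator's parameters.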
