Running EnlightenGAN on Ubuntu 18.04: Process Notes

This post documents running, training, and testing the EnlightenGAN code. Running it requires creating a virtual environment and downloading pretrained models; testing requires creating specific folders and placing images in them; training requires preparing a dataset. It also lists errors you may hit (CUDA-related errors, out-of-memory, etc.) along with fixes, and ends with the training log.

Setup

1. Code: https://github.com/VITA-Group/EnlightenGAN

2. Create a conda virtual environment:

conda create -n enlighten python=3.5

3. Enter the project folder, open a terminal, and run:

conda activate enlighten
pip install -r requirement.txt

4. Create a model folder:

mkdir model

5. Download the VGG pretrained model and put it in the model folder.
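Before running anything, it can save time to verify the expected layout. A minimal sketch, assuming the VGG weights are saved as model/vgg16.weight (treat that filename as an assumption; use whatever name the downloaded file has):

```python
import os

def check_vgg_model(project_root=".", filename="vgg16.weight"):
    """Return the path to the VGG weights file if present, else None."""
    path = os.path.join(project_root, "model", filename)
    return path if os.path.isfile(path) else None
```

Running this from the project root tells you immediately whether step 5 was completed, instead of failing mid-training.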

Training / Testing

I. Testing only:
1. Download the pretrained model and put it in ./checkpoints/enlightening

2. Create the folders ../test_dataset/testA and ../test_dataset/testB (i.e., test_dataset sits at the same level as the project folder). Put the images you want to test into testA, and put at least one arbitrary image into testB.
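The folder setup above can be scripted. A minimal sketch (run from inside the project folder, so that ".." lands the dataset next to it):

```python
import os

def make_test_dirs(base=".."):
    """Create the sibling test_dataset/testA and testB folders."""
    paths = []
    for sub in ("testA", "testB"):
        p = os.path.join(base, "test_dataset", sub)
        os.makedirs(p, exist_ok=True)  # no-op if the folder already exists
        paths.append(p)
    # Put your own images into testA; testB only needs at least one
    # arbitrary image so the data loader does not fail.
    return paths
```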

3. Run:

python scripts/script.py --predict

The final results are saved in ./ablation.

Possible errors:
(1)RuntimeError: CUDNN_STATUS_EXECUTION_FAILED

Fix: see the solutions for RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED (error (2) in the training section below applies the same fix).

II. Training your own model
1. Create the folders ../final_dataset/trainA and ../final_dataset/trainB (i.e., final_dataset sits at the same level as the project folder), and put the (downloaded) images into them respectively.
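Likewise for the training data. This sketch also counts the images found, so you can sanity-check the "#training images" figure printed at startup (the extension list is an assumption; adjust it for your data):

```python
import os

IMG_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def count_images(folder):
    """Count files with common image extensions in a folder."""
    return sum(
        1 for f in os.listdir(folder)
        if os.path.splitext(f)[1].lower() in IMG_EXTS
    )

def setup_train_dirs(base=".."):
    """Create final_dataset/trainA and trainB, return per-folder image counts."""
    counts = {}
    for sub in ("trainA", "trainB"):
        p = os.path.join(base, "final_dataset", sub)
        os.makedirs(p, exist_ok=True)
        counts[sub] = count_images(p)
    return counts
```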

2. Start the visdom server:

nohup python -m visdom.server -port=8097 &

(Optional) Open a browser and go to http://localhost:8097/ to watch the image results in real time.

To stop visdom.server later:
(1) ps -aux | grep visdom.server
(2) sudo kill <PID>

3. Open another terminal, activate the enlighten virtual environment, and run:

python scripts/script.py --train

Possible errors:
(1) A crash caused by the default multi-GPU settings (the original screenshot is omitted)
Fix:

  • In ./checkpoints/enlightening/opt.txt, change gpu_ids: [0,1,2] to gpu_ids: [0]
  • In ./scripts/script.py, change gpu_ids 0,1,2 to gpu_ids 0

(2) RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED (the original screenshot is omitted)
Fix: apply the standard workaround for CUDNN_STATUS_EXECUTION_FAILED:

  • Add the following to train.py:
import torch
torch.backends.cudnn.enabled = False  # disable cuDNN to work around the execution failure

(3) error: cuda runtime error (38) : no CUDA-capable device is detected at …\aten\src\

Fix:

  • Running nvidia-smi printed: NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver
  • In that case, fix the NVIDIA driver first (the original post linked its method as "here")

(4) Out of memory
Temporary fix:
Reduce the batch size, or shrink the training images.

I set batchSize to 1 and resized the training images to 256×256,
and in base_dataset.py changed transform_list.append(transforms.RandomCrop(opt.fineSize)) to transform_list.append(transforms.RandomCrop(256)).
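When shrinking images it helps to keep the aspect ratio while making the shorter side exactly 256, so RandomCrop(256) never receives an input smaller than the crop. A small stdlib helper sketching the target-size computation (the actual resizing would be done with PIL or torchvision, omitted here):

```python
def resize_target(width, height, short_side=256):
    """Return (w, h) scaled so the shorter side equals short_side."""
    if width <= height:
        scale = short_side / width
        return short_side, max(short_side, round(height * scale))
    scale = short_side / height
    return max(short_side, round(width * scale)), short_side
```

For example, a 1024×512 image maps to 512×256: still large enough for a 256×256 random crop, but a quarter of the pixels.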

Training log:

------------ Options -------------
D_P_times2: False
IN_vgg: False
batchSize: 1
beta1: 0.5
checkpoints_dir: ./checkpoints
config: configs/unit_gta2city_folder.yaml
continue_train: False
dataroot: ../final_dataset
dataset_mode: unaligned
display_freq: 30
display_id: 1
display_port: 8097
display_single_pane_ncols: 0
display_winsize: 256
fcn: 0
fineSize: 320
gpu_ids: [0]
high_times: 400
hybrid_loss: True
identity: 0.0
input_linear: False
input_nc: 3
instance_norm: 0.0
isTrain: True
l1: 10.0
lambda_A: 10.0
lambda_B: 10.0
latent_norm: False
latent_threshold: False
lighten: False
linear: False
linear_add: False
loadSize: 286
low_times: 200
lr: 0.0001
max_dataset_size: inf
model: single
multiply: False
nThreads: 4
n_layers_D: 5
n_layers_patchD: 4
name: enlightening
ndf: 64
new_lr: False
ngf: 64
niter: 100
niter_decay: 100
no_dropout: True
no_flip: False
no_html: False
no_lsgan: False
no_vgg_instance: False
noise: 0
norm: instance
norm_attention: False
output_nc: 3
patchD: True
patchD_3: 5
patchSize: 32
patch_vgg: True
phase: train
pool_size: 50
print_freq: 100
resize_or_crop: crop
save_epoch_freq: 5
save_latest_freq: 5000
self_attention: True
serial_batches: False
skip: 1.0
syn_norm: False
tanh: False
times_residual: True
use_avgpool: 0
use_mse: False
use_norm: 1.0
use_ragan: True
use_wgan: 0.0
vary: 1
vgg: 1.0
vgg_choose: relu5_1
vgg_maxpooling: False
vgg_mean: False
which_direction: AtoB
which_epoch: latest
which_model_netD: no_norm_4
which_model_netG: sid_unet_resize
-------------- End ----------------
train.py:15: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  return yaml.load(stream)
CustomDatasetDataLoader
dataset [UnalignedDataset] was created
#training images = 1016
single
---------- Networks initialized -------------
DataParallel(
  (module): Unet_resize_conv(
    (conv1_1): Conv2d(4, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (downsample_1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (downsample_2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (downsample_3): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (downsample_4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (LReLU1_1): LeakyReLU(0.2, inplace)
    (bn1_1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
    (conv1_2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU1_2): LeakyReLU(0.2, inplace)
    (bn1_2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
    (max_pool1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (conv2_1): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU2_1): LeakyReLU(0.2, inplace)
    (bn2_1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    (conv2_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU2_2): LeakyReLU(0.2, inplace)
    (bn2_2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    (max_pool2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (conv3_1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU3_1): LeakyReLU(0.2, inplace)
    (bn3_1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
    (conv3_2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU3_2): LeakyReLU(0.2, inplace)
    (bn3_2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
    (max_pool3): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (conv4_1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU4_1): LeakyReLU(0.2, inplace)
    (bn4_1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
    (conv4_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU4_2): LeakyReLU(0.2, inplace)
    (bn4_2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
    (max_pool4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (conv5_1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU5_1): LeakyReLU(0.2, inplace)
    (bn5_1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
    (conv5_2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU5_2): LeakyReLU(0.2, inplace)
    (bn5_2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
    (deconv5): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (conv6_1): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU6_1): LeakyReLU(0.2, inplace)
    (bn6_1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
    (conv6_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU6_2): LeakyReLU(0.2, inplace)
    (bn6_2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
    (deconv6): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (conv7_1): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU7_1): LeakyReLU(0.2, inplace)
    (bn7_1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
    (conv7_2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU7_2): LeakyReLU(0.2, inplace)
    (bn7_2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
    (deconv7): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (conv8_1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU8_1): LeakyReLU(0.2, inplace)
    (bn8_1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    (conv8_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU8_2): LeakyReLU(0.2, inplace)
    (bn8_2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    (deconv8): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (conv9_1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU9_1): LeakyReLU(0.2, inplace)
    (bn9_1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
    (conv9_2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (LReLU9_2): LeakyReLU(0.2, inplace)
    (conv10): Conv2d(32, 3, kernel_size=(1, 1), stride=(1, 1))
  )
)
Total number of parameters: 8636675
DataParallel(
  (module): NoNormDiscriminator(
    (model): Sequential(
      (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (1): LeakyReLU(0.2, inplace)
      (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (3): LeakyReLU(0.2, inplace)
      (4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (5): LeakyReLU(0.2, inplace)
      (6): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (7): LeakyReLU(0.2, inplace)
      (8): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (9): LeakyReLU(0.2, inplace)
      (10): Conv2d(512, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
      (11): LeakyReLU(0.2, inplace)
      (12): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
    )
  )
)
Total number of parameters: 11154369
DataParallel(
  (module): NoNormDiscriminator(
    (model): Sequential(
      (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (1): LeakyReLU(0.2, inplace)
      (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (3): LeakyReLU(0.2, inplace)
      (4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (5): LeakyReLU(0.2, inplace)
      (6): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
      (7): LeakyReLU(0.2, inplace)
      (8): Conv2d(512, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
      (9): LeakyReLU(0.2, inplace)
      (10): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
    )
  )
)
Total number of parameters: 6959553
-----------------------------------------------
model [SingleGANModel] was created
Setting up a new session...
create web directory ./checkpoints/enlightening/web...
(epoch: 1, iters: 100, time: 0.246) D_A: 0.258 G_A: 2.298 vgg: 0.322 D_P: 0.142 
(epoch: 1, iters: 200, time: 0.229) D_A: 0.274 G_A: 2.480 vgg: 1.187 D_P: 0.003 
(epoch: 1, iters: 300, time: 0.440) D_A: 0.256 G_A: 2.032 vgg: 2.433 D_P: 0.012 
(epoch: 1, iters: 400, time: 0.230) D_A: 0.255 G_A: 2.074 vgg: 0.341 D_P: 0.019 
(epoch: 1, iters: 500, time: 0.230) D_A: 0.253 G_A: 2.108 vgg: 0.375 D_P: 0.031 
(epoch: 1, iters: 600, time: 0.468) D_A: 0.253 G_A: 2.227 vgg: 0.309 D_P: 0.106 
(epoch: 1, iters: 700, time: 0.268) D_A: 0.356 G_A: 1.333 vgg: 0.030 D_P: 0.140 
(epoch: 1, iters: 800, time: 0.231) D_A: 0.256 G_A: 2.215 vgg: 0.162 D_P: 0.045 
(epoch: 1, iters: 900, time: 0.541) D_A: 0.288 G_A: 1.288 vgg: 0.066 D_P: 0.230 
(epoch: 1, iters: 1000, time: 0.230) D_A: 0.311 G_A: 2.786 vgg: 0.459 D_P: 0.033 
End of epoch 1 / 200 	 Time Taken: 232 sec
(epoch: 2, iters: 84, time: 0.233) D_A: 0.278 G_A: 1.805 vgg: 0.169 D_P: 0.017 
(epoch: 2, iters: 184, time: 0.626) D_A: 0.565 G_A: 3.627 vgg: 0.277 D_P: 0.018 
(epoch: 2, iters: 284, time: 0.242) D_A: 0.263 G_A: 1.653 vgg: 0.044 D_P: 0.082 
(epoch: 2, iters: 384, time: 0.231) D_A: 0.266 G_A: 2.345 vgg: 0.179 D_P: 0.098 
(epoch: 2, iters: 484, time: 0.554) D_A: 0.253 G_A: 2.099 vgg: 0.106 D_P: 0.067 
(epoch: 2, iters: 584, time: 0.243) D_A: 0.433 G_A: 0.657 vgg: 0.055 D_P: 0.271 
(epoch: 2, iters: 684, time: 0.232) D_A: 0.253 G_A: 1.987 vgg: 0.156 D_P: 0.005 
(epoch: 2, iters: 784, time: 0.526) D_A: 0.260 G_A: 2.096 vgg: 0.498 D_P: 0.026 
(epoch: 2, iters: 884, time: 0.231) D_A: 0.256 G_A: 1.729 vgg: 0.233 D_P: 0.069 
(epoch: 2, iters: 984, time: 0.231) D_A: 0.360 G_A: 1.343 vgg: 0.236 D_P: 0.230 
End of epoch 2 / 200 	 Time Taken: 235 sec
(epoch: 3, iters: 68, time: 0.569) D_A: 0.341 G_A: 1.312 vgg: 0.265 D_P: 0.110 
(epoch: 3, iters: 168, time: 0.231) D_A: 0.410 G_A: 1.278 vgg: 0.502 D_P: 0.285 
(epoch: 3, iters: 268, time: 0.232) D_A: 0.291 G_A: 1.280 vgg: 0.367 D_P: 0.194 
(epoch: 3, iters: 368, time: 0.542) D_A: 0.261 G_A: 1.680 vgg: 0.580 D_P: 0.188 
(epoch: 3, iters: 468, time: 0.232) D_A: 0.466 G_A: 0.796 vgg: 0.366 D_P: 0.347 
(epoch: 3, iters: 568, time: 0.231) D_A: 0.301 G_A: 1.219 vgg: 0.707 D_P: 0.180 
(epoch: 3, iters: 668, time: 0.507) D_A: 0.264 G_A: 1.575 vgg: 0.659 D_P: 0.158 
(epoch: 3, iters: 768, time: 0.232) D_A: 0.311 G_A: 1.177 vgg: 0.626 D_P: 0.180 
(epoch: 3, iters: 868, time: 0.232) D_A: 0.379 G_A: 0.936 vgg: 0.489 D_P: 0.235 
(epoch: 3, iters: 968, time: 0.545) D_A: 0.362 G_A: 0.974 vgg: 0.382 D_P: 0.231 
End of epoch 3 / 200 	 Time Taken: 232 sec
(epoch: 4, iters: 52, time: 0.232) D_A: 0.308 G_A: 1.103 vgg: 0.312 D_P: 0.198 
(epoch: 4, iters: 152, time: 0.232) D_A: 0.399 G_A: 0.838 vgg: 0.419 D_P: 0.236 
(epoch: 4, iters: 252, time: 0.532) D_A: 0.432 G_A: 0.844 vgg: 0.295 D_P: 0.312 
(epoch: 4, iters: 352, time: 0.232) D_A: 0.414 G_A: 0.891 vgg: 0.387 D_P: 0.240 
(epoch: 4, iters: 452, time: 0.232) D_A: 0.300 G_A: 1.730 vgg: 0.434 D_P: 0.162 
(epoch: 4, iters: 552, time: 0.555) D_A: 0.382 G_A: 0.921 vgg: 0.478 D_P: 0.262 
(epoch: 4, iters: 652, time: 0.232) D_A: 0.495 G_A: 0.862 vgg: 1.041 D_P: 0.195 
(epoch: 4, iters: 752, time: 0.232) D_A: 0.531 G_A: 0.697 vgg: 0.580 D_P: 0.280 
(epoch: 4, iters: 852, time: 0.482) D_A: 0.470 G_A: 0.882 vgg: 1.253 D_P: 0.216 
(epoch: 4, iters: 952, time: 0.232) D_A: 0.295 G_A: 1.424 vgg: 0.744 D_P: 0.228 
End of epoch 4 / 200 	 Time Taken: 231 sec
(epoch: 5, iters: 36, time: 0.233) D_A: 0.416 G_A: 0.921 vgg: 0.364 D_P: 0.297 
(epoch: 5, iters: 136, time: 0.588) D_A: 0.306 G_A: 1.228 vgg: 0.373 D_P: 0.198 
(epoch: 5, iters: 236, time: 0.255) D_A: 0.466 G_A: 0.829 vgg: 0.773 D_P: 0.215 
(epoch: 5, iters: 336, time: 0.232) D_A: 0.487 G_A: 0.930 vgg: 0.457 D_P: 0.218 
(epoch: 5, iters: 436, time: 0.533) D_A: 0.518 G_A: 0.937 vgg: 0.788 D_P: 0.310 
(epoch: 5, iters: 536, time: 0.232) D_A: 0.324 G_A: 1.182 vgg: 0.883 D_P: 0.238 
(epoch: 5, iters: 636, time: 0.258) D_A: 0.308 G_A: 1.105 vgg: 0.685 D_P: 0.256 
(epoch: 5, iters: 736, time: 0.481) D_A: 0.392 G_A: 1.065 vgg: 0.744 D_P: 0.286 
(epoch: 5, iters: 836, time: 0.254) D_A: 0.261 G_A: 1.516 vgg: 0.347 D_P: 0.172 
(epoch: 5, iters: 936, time: 0.254) D_A: 0.385 G_A: 1.004 vgg: 0.554 D_P: 0.195 
saving the latest model (epoch 5, total_steps 5000)
saving the model at the end of epoch 5, iters 5080
End of epoch 5 / 200 	 Time Taken: 237 sec
(epoch: 6, iters: 20, time: 0.586) D_A: 0.405 G_A: 0.946 vgg: 0.237 D_P: 0.198 
(epoch: 6, iters: 120, time: 0.233) D_A: 0.419 G_A: 0.951 vgg: 0.557 D_P: 0.205 
(epoch: 6, iters: 220, time: 0.258) D_A: 0.295 G_A: 1.494 vgg: 0.559 D_P: 0.203 
(epoch: 6, iters: 320, time: 0.609) D_A: 0.288 G_A: 1.923 vgg: 0.313 D_P: 0.151 
(epoch: 6, iters: 420, time: 0.245) D_A: 0.397 G_A: 1.032 vgg: 0.362 D_P: 0.186 
(epoch: 6, iters: 520, time: 0.232) D_A: 0.407 G_A: 0.884 vgg: 0.357 D_P: 0.310 
(epoch: 6, iters: 620, time: 0.604) D_A: 0.403 G_A: 1.131 vgg: 0.800 D_P: 0.275 
(epoch: 6, iters: 720, time: 0.253) D_A: 0.329 G_A: 1.132 vgg: 0.264 D_P: 0.253 
(epoch: 6, iters: 820, time: 0.233) D_A: 0.434 G_A: 0.887 vgg: 0.567 D_P: 0.309 
(epoch: 6, iters: 920, time: 0.610) D_A: 0.441 G_A: 0.982 vgg: 0.683 D_P: 0.343 
End of epoch 6 / 200 	 Time Taken: 240 sec
(epoch: 7, iters: 4, time: 0.233) D_A: 0.270 G_A: 1.317 vgg: 0.706 D_P: 0.251 
(epoch: 7, iters: 104, time: 0.252) D_A: 0.291 G_A: 1.597 vgg: 1.216 D_P: 0.084 
(epoch: 7, iters: 204, time: 0.586) D_A: 0.476 G_A: 0.935 vgg: 0.739 D_P: 0.231 
(epoch: 7, iters: 304, time: 0.252) D_A: 0.279 G_A: 1.394 vgg: 0.228 D_P: 0.242 
(epoch: 7, iters: 404, time: 0.233) D_A: 0.266 G_A: 1.340 vgg: 0.329 D_P: 0.268 
(epoch: 7, iters: 504, time: 0.593) D_A: 0.264 G_A: 1.367 vgg: 0.650 D_P: 0.168 
(epoch: 7, iters: 604, time: 0.232) D_A: 0.280 G_A: 1.468 vgg: 0.348 D_P: 0.185 
(epoch: 7, iters: 704, time: 0.237) D_A: 0.301 G_A: 1.245 vgg: 0.673 D_P: 0.164 
(epoch: 7, iters: 804, time: 0.567) D_A: 0.367 G_A: 0.975 vgg: 0.580 D_P: 0.283 
(epoch: 7, iters: 904, time: 0.233) D_A: 0.582 G_A: 0.832 vgg: 0.197 D_P: 0.209 
(epoch: 7, iters: 1004, time: 0.233) D_A: 0.287 G_A: 1.100 vgg: 0.454 D_P: 0.268 
End of epoch 7 / 200 	 Time Taken: 241 sec
(epoch: 8, iters: 88, time: 0.558) D_A: 0.333 G_A: 1.092 vgg: 0.314 D_P: 0.190 
(epoch: 8, iters: 188, time: 0.234) D_A: 0.309 G_A: 1.228 vgg: 0.436 D_P: 0.276 
(epoch: 8, iters: 288, time: 0.233) D_A: 0.360 G_A: 1.077 vgg: 0.813 D_P: 0.276 
(epoch: 8, iters: 388, time: 0.545) D_A: 0.425 G_A: 0.831 vgg: 0.492 D_P: 0.284 
(epoch: 8, iters: 488, time: 0.232) D_A: 0.289 G_A: 1.173 vgg: 0.272 D_P: 0.216 
(epoch: 8, iters: 588, time: 0.232) D_A: 0.368 G_A: 0.985 vgg: 0.848 D_P: 0.323 
(epoch: 8, iters: 688, time: 0.557) D_A: 0.561 G_A: 0.900 vgg: 0.478 D_P: 0.325 
(epoch: 8, iters: 788, time: 0.232) D_A: 0.406 G_A: 0.977 vgg: 0.296 D_P: 0.270 
(epoch: 8, iters: 888, time: 0.231) D_A: 0.300 G_A: 1.209 vgg: 0.555 D_P: 0.243 
(epoch: 8, iters: 988, time: 0.505) D_A: 0.263 G_A: 1.673 vgg: 0.869 D_P: 0.220 
End of epoch 8 / 200 	 Time Taken: 232 sec
(epoch: 9, iters: 72, time: 0.232) D_A: 0.277 G_A: 1.701 vgg: 1.143 D_P: 0.161 
(epoch: 9, iters: 172, time: 0.232) D_A: 0.424 G_A: 0.826 vgg: 0.238 D_P: 0.307 
(epoch: 9, iters: 272, time: 0.616) D_A: 0.342 G_A: 1.165 vgg: 0.314 D_P: 0.232 
(epoch: 9, iters: 372, time: 0.233) D_A: 0.264 G_A: 1.450 vgg: 0.353 D_P: 0.208 
(epoch: 9, iters: 472, time: 0.232) D_A: 0.362 G_A: 2.365 vgg: 0.390 D_P: 0.176 
(epoch: 9, iters: 572, time: 0.543) D_A: 0.386 G_A: 1.025 vgg: 1.218 D_P: 0.313 
(epoch: 9, iters: 672, time: 0.232) D_A: 0.329 G_A: 1.057 vgg: 0.276 D_P: 0.260 
(epoch: 9, iters: 772, time: 0.231) D_A: 0.321 G_A: 1.256 vgg: 0.577 D_P: 0.227 
(epoch: 9, iters: 872, time: 0.589) D_A: 0.285 G_A: 1.255 vgg: 0.409 D_P: 0.221 
(epoch: 9, iters: 972, time: 0.232) D_A: 0.425 G_A: 2.426 vgg: 0.278 D_P: 0.140 
End of epoch 9 / 200 	 Time Taken: 232 sec
(epoch: 10, iters: 56, time: 0.231) D_A: 0.409 G_A: 1.026 vgg: 0.264 D_P: 0.210 
(epoch: 10, iters: 156, time: 0.568) D_A: 0.275 G_A: 1.892 vgg: 0.296 D_P: 0.146 
(epoch: 10, iters: 256, time: 0.232) D_A: 0.489 G_A: 0.856 vgg: 0.517 D_P: 0.226 
(epoch: 10, iters: 356, time: 0.232) D_A: 0.354 G_A: 1.096 vgg: 0.287 D_P: 0.188 
(epoch: 10, iters: 456, time: 0.549) D_A: 0.467 G_A: 0.866 vgg: 0.777 D_P: 0.207 
(epoch: 10, iters: 556, time: 0.231) D_A: 0.433 G_A: 0.907 vgg: 0.619 D_P: 0.259 
(epoch: 10, iters: 656, time: 0.232) D_A: 0.611 G_A: 0.831 vgg: 1.192 D_P: 0.303 
(epoch: 10, iters: 756, time: 0.542) D_A: 0.375 G_A: 2.163 vgg: 0.478 D_P: 0.196 
(epoch: 10, iters: 856, time: 0.233) D_A: 0.257 G_A: 1.670 vgg: 0.408 D_P: 0.117 
saving the latest model (epoch 10, total_steps 10000)
(epoch: 10, iters: 956, time: 0.231) D_A: 0.383 G_A: 0.887 vgg: 0.347 D_P: 0.274 
saving the model at the end of epoch 10, iters 10160
End of epoch 10 / 200 	 Time Taken: 233 sec
(epoch: 11, iters: 40, time: 0.595) D_A: 0.314 G_A: 1.312 vgg: 0.439 D_P: 0.313 
(epoch: 11, iters: 140, time: 0.232) D_A: 0.274 G_A: 1.407 vgg: 0.738 D_P: 0.231 
(epoch: 11, iters: 240, time: 0.232) D_A: 0.271 G_A: 1.747 vgg: 0.262 D_P: 0.128 
(epoch: 11, iters: 340, time: 0.573) D_A: 0.442 G_A: 0.941 vgg: 0.463 D_P: 0.243 
(epoch: 11, iters: 440, time: 0.232) D_A: 0.269 G_A: 1.889 vgg: 0.227 D_P: 0.095 
(epoch: 11, iters: 540, time: 0.232) D_A: 0.570 G_A: 0.800 vgg: 0.117 D_P: 0.229 
(epoch: 11, iters: 640, time: 0.591) D_A: 0.622 G_A: 0.699 vgg: 0.362 D_P: 0.347 
(epoch: 11, iters: 740, time: 0.232) D_A: 0.285 G_A: 1.311 vgg: 0.244 D_P: 0.240 
(epoch: 11, iters: 840, time: 0.232) D_A: 0.275 G_A: 1.744 vgg: 0.991 D_P: 0.147 
(epoch: 11, iters: 940, time: 0.523) D_A: 0.260 G_A: 1.512 vgg: 0.432 D_P: 0.106 
End of epoch 11 / 200 	 Time Taken: 232 sec
(epoch: 12, iters: 24, time: 0.232) D_A: 0.303 G_A: 1.147 vgg: 0.781 D_P: 0.217 
(epoch: 12, iters: 124, time: 0.232) D_A: 0.323 G_A: 1.295 vgg: 0.957 D_P: 0.161 
(epoch: 12, iters: 224, time: 0.563) D_A: 0.269 G_A: 1.727 vgg: 0.516 D_P: 0.143 
(epoch: 12, iters: 324, time: 0.232) D_A: 0.376 G_A: 0.836 vgg: 0.362 D_P: 0.363 
(epoch: 12, iters: 424, time: 0.232) D_A: 0.290 G_A: 1.264 vgg: 0.276 D_P: 0.198 
(epoch: 12, iters: 524, time: 0.566) D_A: 0.343 G_A: 1.127 vgg: 0.163 D_P: 0.187 
(epoch: 12, iters: 624, time: 0.233) D_A: 0.457 G_A: 1.197 vgg: 0.475 D_P: 0.188 
(epoch: 12, iters: 724, time: 0.233) D_A: 0.494 G_A: 0.732 vgg: 0.346 D_P: 0.399 
(epoch: 12, iters: 824, time: 0.539) D_A: 0.278 G_A: 1.411 vgg: 0.802 D_P: 0.174 
(epoch: 12, iters: 924, time: 0.233) D_A: 0.291 G_A: 1.594 vgg: 0.527 D_P: 0.216 
End of epoch 12 / 200 	 Time Taken: 232 sec
(epoch: 13, iters: 8, time: 0.232) D_A: 0.577 G_A: 0.772 vgg: 0.346 D_P: 0.362 
(epoch: 13, iters: 108, time: 0.537) D_A: 0.493 G_A: 0.939 vgg: 0.213 D_P: 0.229 
(epoch: 13, iters: 208, time: 0.232) D_A: 0.317 G_A: 1.164 vgg: 0.257 D_P: 0.254 
(epoch: 13, iters: 308, time: 0.231) D_A: 0.289 G_A: 1.303 vgg: 0.900 D_P: 0.188 
(epoch: 13, iters: 408, time: 0.521) D_A: 0.279 G_A: 1.379 vgg: 0.422 D_P: 0.242 
(epoch: 13, iters: 508, time: 0.231) D_A: 0.506 G_A: 0.775 vgg: 0.419 D_P: 0.296 
(epoch: 13, iters: 608, time: 0.232) D_A: 0.283 G_A: 1.551 vgg: 0.362 D_P: 0.155 
(epoch: 13, iters: 708, time: 0.598) D_A: 0.418 G_A: 0.770 vgg: 0.300 D_P: 0.282 
(epoch: 13, iters: 808, time: 0.231) D_A: 0.268 G_A: 1.754 vgg: 0.171 D_P: 0.131 
(epoch: 13, iters: 908, time: 0.232) D_A: 0.444 G_A: 1.140 vgg: 0.308 D_P: 0.208 
(epoch: 13, iters: 1008, time: 0.560) D_A: 0.336 G_A: 1.253 vgg: 0.398 D_P: 0.219 
End of epoch 13 / 200 	 Time Taken: 232 sec
(epoch: 14, iters: 92, time: 0.232) D_A: 0.264 G_A: 1.551 vgg: 0.405 D_P: 0.157 
(epoch: 14, iters: 192, time: 0.231) D_A: 0.333 G_A: 1.027 vgg: 0.390 D_P: 0.284 
(epoch: 14, iters: 292, time: 0.509) D_A: 0.352 G_A: 1.147 vgg: 0.872 D_P: 0.167 
(epoch: 14, iters: 392, time: 0.232) D_A: 0.347 G_A: 2.184 vgg: 0.298 D_P: 0.134 
(epoch: 14, iters: 492, time: 0.232) D_A: 0.304 G_A: 1.224 vgg: 0.318 D_P: 0.279 
(epoch: 14, iters: 592, time: 0.548) D_A: 0.347 G_A: 1.229 vgg: 0.917 D_P: 0.237 
(epoch: 14, iters: 692, time: 0.233) D_A: 0.412 G_A: 0.949 vgg: 0.507 D_P: 0.255 
(epoch: 14, iters: 792, time: 0.232) D_A: 0.263 G_A: 1.655 vgg: 0.298 D_P: 0.173 
(epoch: 14, iters: 892, time: 0.506) D_A: 0.332 G_A: 1.103 vgg: 0.284 D_P: 0.324 
(epoch: 14, iters: 992, time: 0.231) D_A: 0.265 G_A: 1.392 vgg: 0.282 D_P: 0.136 
End of epoch 14 / 200 	 Time Taken: 232 sec
(epoch: 15, iters: 76, time: 0.232) D_A: 0.387 G_A: 0.996 vgg: 0.288 D_P: 0.342 
(epoch: 15, iters: 176, time: 0.604) D_A: 0.336 G_A: 1.306 vgg: 0.477 D_P: 0.166 
(epoch: 15, iters: 276, time: 0.232) D_A: 0.266 G_A: 1.625 vgg: 0.367 D_P: 0.190 
(epoch: 15, iters: 376, time: 0.232) D_A: 0.352 G_A: 1.058 vgg: 0.573 D_P: 0.249 
(epoch: 15, iters: 476, time: 0.542) D_A: 0.348 G_A: 1.177 vgg: 0.305 D_P: 0.272 
(epoch: 15, iters: 576, time: 0.232) D_A: 0.515 G_A: 0.868 vgg: 0.276 D_P: 0.286 
(epoch: 15, iters: 676, time: 0.232) D_A: 0.419 G_A: 0.940 vgg: 0.347 D_P: 0.284 
(epoch: 15, iters: 776, time: 0.547) D_A: 0.268 G_A: 1.471 vgg: 0.559 D_P: 0.235 
saving the latest model (epoch 15, total_steps 15000)
(epoch: 15, iters: 876, time: 0.231) D_A: 0.272 G_A: 1.500 vgg: 0.584 D_P: 0.209 
(epoch: 15, iters: 976, time: 0.232) D_A: 0.315 G_A: 1.212 vgg: 0.264 D_P: 0.183 
saving the model at the end of epoch 15, iters 15240
End of epoch 15 / 200 	 Time Taken: 234 sec
(epoch: 16, iters: 60, time: 0.627) D_A: 0.440 G_A: 0.996 vgg: 0.170 D_P: 0.459 
(epoch: 16, iters: 160, time: 0.232) D_A: 0.344 G_A: 1.272 vgg: 0.600 D_P: 0.228 
(epoch: 16, iters: 260, time: 0.232) D_A: 0.269 G_A: 1.894 vgg: 0.186 D_P: 0.084 
(epoch: 16, iters: 360, time: 0.557) D_A: 0.390 G_A: 1.206 vgg: 0.224 D_P: 0.151 
(epoch: 16, iters: 460, time: 0.231) D_A: 0.360 G_A: 1.114 vgg: 0.456 D_P: 0.184 
(epoch: 16, iters: 560, time: 0.232) D_A: 0.372 G_A: 1.087 vgg: 0.428 D_P: 0.231 
(epoch: 16, iters: 660, time: 0.548) D_A: 0.270 G_A: 1.908 vgg: 0.395 D_P: 0.145 
(epoch: 16, iters: 760, time: 0.233) D_A: 0.268 G_A: 1.433 vgg: 0.386 D_P: 0.249 
(epoch: 16, iters: 860, time: 0.231) D_A: 0.397 G_A: 0.827 vgg: 0.228 D_P: 0.298 
(epoch: 16, iters: 960, time: 0.520) D_A: 0.343 G_A: 1.226 vgg: 1.084 D_P: 0.179 
End of epoch 16 / 200 	 Time Taken: 231 sec
(epoch: 17, iters: 44, time: 0.232) D_A: 0.286 G_A: 1.942 vgg: 0.800 D_P: 0.214 
(epoch: 17, iters: 144, time: 0.233) D_A: 0.289 G_A: 1.856 vgg: 0.884 D_P: 0.237 
(epoch: 17, iters: 244, time: 0.488) D_A: 0.259 G_A: 1.474 vgg: 0.348 D_P: 0.258 
(epoch: 17, iters: 344, time: 0.233) D_A: 0.485 G_A: 0.911 vgg: 0.453 D_P: 0.302 
(epoch: 17, iters: 444, time: 0.232) D_A: 0.282 G_A: 1.734 vgg: 0.662 D_P: 0.228 
(epoch: 17, iters: 544, time: 0.527) D_A: 0.603 G_A: 0.683 vgg: 0.391 D_P: 0.264 
(epoch: 17, iters: 644, time: 0.232) D_A: 0.403 G_A: 1.075 vgg: 0.574 D_P: 0.328 
(epoch: 17, iters: 744, time: 0.233) D_A: 0.292 G_A: 2.071 vgg: 0.592 D_P: 0.120 
(epoch: 17, iters: 844, time: 0.591) D_A: 0.322 G_A: 1.314 vgg: 0.244 D_P: 0.162 
(epoch: 17, iters: 944, time: 0.232) D_A: 0.397 G_A: 1.041 vgg: 0.637 D_P: 0.323 
End of epoch 17 / 200 	 Time Taken: 232 sec
(epoch: 18, iters: 28, time: 0.232) D_A: 0.322 G_A: 1.360 vgg: 0.721 D_P: 0.178 
(epoch: 18, iters: 128, time: 0.534) D_A: 0.268 G_A: 1.734 vgg: 0.483 D_P: 0.194 
(epoch: 18, iters: 228, time: 0.232) D_A: 0.367 G_A: 1.084 vgg: 0.666 D_P: 0.285 
(epoch: 18, iters: 328, time: 0.233) D_A: 0.269 G_A: 1.625 vgg: 0.283 D_P: 0.154 
(epoch: 18, iters: 428, time: 0.590) D_A: 0.335 G_A: 1.049 vgg: 0.483 D_P: 0.191 
(epoch: 18, iters: 528, time: 0.232) D_A: 0.402 G_A: 0.894 vgg: 0.334 D_P: 0.325 
(epoch: 18, iters: 628, time: 0.232) D_A: 0.264 G_A: 1.558 vgg: 0.731 D_P: 0.123 
(epoch: 18, iters: 728, time: 0.519) D_A: 0.257 G_A: 1.560 vgg: 0.291 D_P: 0.152 
(epoch: 18, iters: 828, time: 0.232) D_A: 0.430 G_A: 2.457 vgg: 0.402 D_P: 0.185 
(epoch: 18, iters: 928, time: 0.232) D_A: 0.276 G_A: 1.340 vgg: 0.612 D_P: 0.331 
End of epoch 18 / 200 	 Time Taken: 232 sec
(epoch: 19, iters: 12, time: 0.560) D_A: 0.639 G_A: 0.742 vgg: 0.458 D_P: 0.396 
(epoch: 19, iters: 112, time: 0.232) D_A: 0.270 G_A: 1.870 vgg: 0.398 D_P: 0.149 
(epoch: 19, iters: 212, time: 0.232) D_A: 0.333 G_A: 1.108 vgg: 0.292 D_P: 0.226 
(epoch: 19, iters: 312, time: 0.585) D_A: 0.269 G_A: 1.502 vgg: 0.235 D_P: 0.262 
(epoch: 19, iters: 412, time: 0.232) D_A: 0.327 G_A: 1.149 vgg: 0.391 D_P: 0.287 
(epoch: 19, iters: 512, time: 0.233) D_A: 0.345 G_A: 1.231 vgg: 1.225 D_P: 0.289 
(epoch: 19, iters: 612, time: 0.555) D_A: 0.451 G_A: 1.015 vgg: 0.438 D_P: 0.315 
(epoch: 19, iters: 712, time: 0.231) D_A: 0.279 G_A: 2.000 vgg: 0.472 D_P: 0.077 
(epoch: 19, iters: 812, time: 0.232) D_A: 0.269 G_A: 1.683 vgg: 0.890 D_P: 0.127 
(epoch: 19, iters: 912, time: 0.547) D_A: 0.497 G_A: 0.726 vgg: 0.640 D_P: 0.373 
(epoch: 19, iters: 1012, time: 0.232) D_A: 0.428 G_A: 0.971 vgg: 0.712 D_P: 0.356 
End of epoch 19 / 200 	 Time Taken: 232 sec
(epoch: 20, iters: 96, time: 0.231) D_A: 0.274 G_A: 1.722 vgg: 0.323 D_P: 0.151 
(epoch: 20, iters: 196, time: 0.630) D_A: 0.272 G_A: 1.787 vgg: 0.627 D_P: 0.139 
(epoch: 20, iters: 296, time: 0.231) D_A: 0.345 G_A: 2.372 vgg: 0.380 D_P: 0.077 
(epoch: 20, iters: 396, time: 0.232) D_A: 0.376 G_A: 1.149 vgg: 0.320 D_P: 0.236 
(epoch: 20, iters: 496, time: 0.555) D_A: 0.270 G_A: 1.760 vgg: 0.702 D_P: 0.151 
(epoch: 20, iters: 596, time: 0.232) D_A: 0.263 G_A: 1.617 vgg: 1.277 D_P: 0.307 
(epoch: 20, iters: 696, time: 0.232) D_A: 0.261 G_A: 1.529 vgg: 0.735 D_P: 0.128 
saving the latest model (epoch 20, total_steps 20000)
(epoch: 20, iters: 796, time: 0.587) D_A: 0.282 G_A: 1.410 vgg: 0.289 D_P: 0.217 
(epoch: 20, iters: 896, time: 0.232) D_A: 0.293 G_A: 1.975 vgg: 0.507 D_P: 0.164 
(epoch: 20, iters: 996, time: 0.232) D_A: 0.284 G_A: 1.413 vgg: 0.326 D_P: 0.204 
saving the model at the end of epoch 20, iters 20320
End of epoch 20 / 200 	 Time Taken: 234 sec
(epoch: 21, iters: 80, time: 0.588) D_A: 0.338 G_A: 1.192 vgg: 0.417 D_P: 0.231 
(epoch: 21, iters: 180, time: 0.231) D_A: 0.261 G_A: 1.731 vgg: 0.371 D_P: 0.195 
(epoch: 21, iters: 280, time: 0.231) D_A: 0.323 G_A: 1.183 vgg: 0.384 D_P: 0.216 
(epoch: 21, iters: 380, time: 0.499) D_A: 0.335 G_A: 2.280 vgg: 0.528 D_P: 0.150 
(epoch: 21, iters: 480, time: 0.232) D_A: 0.269 G_A: 1.363 vgg: 0.236 D_P: 0.220 
(epoch: 21, iters: 580, time: 0.232) D_A: 0.379 G_A: 1.144 vgg: 0.522 D_P: 0.170 
(epoch: 21, iters: 680, time: 0.538) D_A: 0.393 G_A: 1.118 vgg: 0.419 D_P: 0.241 
(epoch: 21, iters: 780, time: 0.232) D_A: 0.308 G_A: 1.841 vgg: 0.309 D_P: 0.212 
(epoch: 21, iters: 8… (the remainder of the log is truncated)