GAN-1

Deep Learning Notes (10)

Posted by Nino Lau on April 27, 2019

Experiment Requirements and Basic Workflow

Requirements

  1. Building on the lecture material, develop a thorough understanding of the principles and training procedure of GANs (Generative Adversarial Networks), and learn how GAN architectures have evolved, along with the ideas behind several basic variants (e.g., DCGAN, WGAN).
  2. Read the experiment instructions, run and complete the code as prompted, or briefly answer the questions. Keep the experimental results when submitting your work.

Workflow

  • GAN architecture and training
  • DCGAN
  • LSGAN
  • WGAN
  • WGAN-GP

GAN(Generative Adversarial Networks)

Let's start with a generative adversarial network (GAN) that uses only linear layers, to get a feel for the basic GAN architecture and training procedure.

The GAN consists of two parts: a generator network (Generator) and a discriminator network (Discriminator).

  • The Generator maps randomly sampled noise z through several linear layers to produce an image. Note that the generator's last layer is a Tanh, so generated pixel values lie in [-1, 1]; accordingly, we also normalize real images to [-1, 1].
  • The Discriminator is a binary classifier: it passes an image through several linear layers to decide whether the image is "real" or "generated". Its last layer is a Sigmoid, which outputs the probability that the image is real.

In both networks we use LeakyReLU as the activation function, except in the last layers of G and D; we also insert BatchNormalization between layers.

import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from utils import show
%matplotlib inline
class Generator(nn.Module):
    def __init__(self, image_size=32, latent_dim=100, output_channel=1):
        """
        image_size: image with and height
        latent dim: the dimension of random noise z
        output_channel: the channel of generated image, for example, 1 for gray image, 3 for RGB image
        """
        super(Generator, self).__init__()
        self.latent_dim = latent_dim
        self.output_channel = output_channel
        self.image_size = image_size
        
        # Linear layer: latent_dim -> 128 -> 256 -> 512 -> 1024 -> output_channel * image_size * image_size -> Tanh
        self.model = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.BatchNorm1d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(128, 256),
            nn.BatchNorm1d(256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 512),
            nn.BatchNorm1d(512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 1024),
            nn.BatchNorm1d(1024),
            nn.LeakyReLU(0.2, inplace=True),
            
            nn.Linear(1024, output_channel * image_size * image_size),
            nn.Tanh()
        )

    def forward(self, z):
        img = self.model(z)
        img = img.view(img.size(0), self.output_channel, self.image_size, self.image_size)
        return img


class Discriminator(nn.Module):
    def __init__(self, image_size=32, input_channel=1):
        """
        image_size: image with and height
        input_channel: the channel of input image, for example, 1 for gray image, 3 for RGB image
        """
        super(Discriminator, self).__init__()
        self.image_size = image_size
        self.input_channel = input_channel
        
        # Linear layer: input_channel * image_size * image_size -> 1024 -> 512 -> 256 -> 1 -> Sigmoid
        self.model = nn.Sequential(
            nn.Linear(input_channel * image_size * image_size, 1024),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1024, 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, img):
        img_flat = img.view(img.size(0), -1)
        out = self.model(img_flat)
        return out

Datasets

Before training our GAN, let's introduce the datasets available for this experiment. We provide two datasets for you to experiment with.

  • A handwritten-digit MNIST dataset. To speed up training, we provide a simplified two-class version containing only the digits 0 and 2, with 1,000 images per class. The images are 28*28 single-channel grayscale images (which we resize to 32*32). Since this is a GAN, we do not need a test set. This is the main training dataset for this experiment.

  • An indoor-furniture dataset. To speed up training, we trimmed it down to a single class (chair) with 500 images in total. The images are 32*32 3-channel color images.

Below are two functions for loading the datasets. Note that all images are normalized to [-1, 1].

def load_mnist_data():
    """
    load mnist(0,1,2) dataset 
    """
    
    transform = torchvision.transforms.Compose([
        # convert to a 1-channel grayscale image, since images are read in RGB mode
        transforms.Grayscale(1),
        # resize image from 28 * 28 to 32 * 32
        transforms.Resize(32),
        transforms.ToTensor(),
        # normalize with mean=0.5 std=0.5
        transforms.Normalize(mean=(0.5, ), 
                             std=(0.5, ))
        ])
    
    train_dataset = torchvision.datasets.ImageFolder(root='./data/mnist', transform=transform)
    
    return train_dataset

def load_furniture_data():
    """
    load furniture dataset 
    """
    transform = torchvision.transforms.Compose([
        transforms.ToTensor(),
        # normalize with mean=0.5 std=0.5
        transforms.Normalize(mean=(0.5, 0.5, 0.5), 
                             std=(0.5, 0.5, 0.5))
        ])
    train_dataset = torchvision.datasets.ImageFolder(root='./data/household_furniture', transform=transform)
    return train_dataset

(No need to read this in detail.) Run the following 2 cells to view 20 random real images from each dataset.

def denorm(x):
    # denormalize
    out = (x + 1) / 2
    return out.clamp(0, 1)
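A quick illustrative check (with made-up example pixel values, not part of the notebook) that denorm inverts the Normalize(mean=0.5, std=0.5) transform applied when loading the data:

```python
import torch

# denorm maps [-1, 1] back to [0, 1], undoing Normalize(mean=0.5, std=0.5)
def denorm(x):
    out = (x + 1) / 2
    return out.clamp(0, 1)

pixels = torch.tensor([0.0, 0.5, 1.0])   # original pixel values in [0, 1]
normalized = (pixels - 0.5) / 0.5        # what transforms.Normalize does
assert torch.allclose(denorm(normalized), pixels)
```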
from utils import show
"""
you can pass code in this cell
"""
# show mnist real data
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=20, shuffle=True)
show(torchvision.utils.make_grid(denorm(next(iter(trainloader))[0]), nrow=5))
# show furniture real data
train_dataset = load_furniture_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=20, shuffle=True)
show(torchvision.utils.make_grid(denorm(next(iter(trainloader))[0]), nrow=5))

The following code implements one training epoch of the GAN.

Broadly, each GAN training step has two parts. First, feed random noise z to G to generate images, feed both real images and G's generated images to D, and backpropagate the corresponding loss to optimize D. Then generate images with G again, feed them to D, and backpropagate the corresponding loss to optimize G.

The optimization objective of the vanilla GAN over G and D is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x\sim P_r}[\log D(x)] + \mathbb{E}_{z\sim P(z)}[\log(1 - D(G(z)))]$$

Note that this describes the *optimization objectives* of G and D; in the implementation, we realize these objectives through loss functions. Both can be implemented with the Binary Cross Entropy (BCE) loss

$$\mathcal{L}_{BCE} = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \log p_i + (1 - y_i)\log(1 - p_i)\right]$$

where $p_i$ and $y_i$ are the model's prediction and the image's true label (1 for real, 0 for fake). Thus, maximizing D's objective is achieved by minimizing a BCE loss in which real images $x\sim{P_r}$ are labeled 1 and generated images $z\sim{P(z)}$ are labeled 0; this loss is exactly D's objective with the sign flipped.

For G, we likewise minimize a BCE loss: we simply label the generated images $z\sim{P(z)}$ as 1, which drives G toward its optimization objective (this is the widely used non-saturating form $-\log D(G(z))$).
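To make the label trick concrete, here is a small self-contained sketch (not part of the assignment code) showing that nn.BCELoss with labels 1/0 reproduces the D and G loss terms described above; the probabilities 0.9 and 0.2 are arbitrary stand-ins for D's outputs:

```python
import math
import torch
import torch.nn as nn

bce = nn.BCELoss()

p_real = torch.tensor([0.9])   # D's output on a real image (stand-in value)
p_fake = torch.tensor([0.2])   # D's output on a generated image (stand-in value)
ones, zeros = torch.ones(1), torch.zeros(1)

# D minimizes: -log D(x) - log(1 - D(G(z)))   (real labeled 1, fake labeled 0)
d_loss = bce(p_real, ones) + bce(p_fake, zeros)

# G minimizes: -log D(G(z))   (fake labeled 1, the non-saturating form)
g_loss = bce(p_fake, ones)

# check against the closed-form expressions
assert abs(d_loss.item() - (-math.log(0.9) - math.log(0.8))) < 1e-5
assert abs(g_loss.item() - (-math.log(0.2))) < 1e-5
```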

def train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device, z_dim):
    """
    train a GAN with model G and D in one epoch
    Args:
        trainloader: data loader to train
        G: model Generator
        D: model Discriminator
        G_optimizer: optimizer for G (e.g. Adam, SGD)
        D_optimizer: optimizer for D (e.g. Adam, SGD)
        loss_func: loss function to train G and D, e.g. the Binary Cross Entropy (BCE) loss
        device: cpu or cuda device
        z_dim: the dimension of random noise z
    """
    # set train mode
    D.train()
    G.train()
    
    D_total_loss = 0
    G_total_loss = 0
    
    
    for i, (x, _) in enumerate(trainloader):
        # real label and fake label
        y_real = torch.ones(x.size(0), 1).to(device)
        y_fake = torch.zeros(x.size(0), 1).to(device)
        
        x = x.to(device)
        z = torch.rand(x.size(0), z_dim).to(device)

        # update D network
        # D optimizer zero grads
        D_optimizer.zero_grad()
        
        # D real loss from real images
        d_real = D(x)
        d_real_loss = loss_func(d_real, y_real)
        
        # D fake loss from fake images generated by G
        g_z = G(z)
        d_fake = D(g_z)
        d_fake_loss = loss_func(d_fake, y_fake)
        
        # D backward and step
        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        D_optimizer.step()

        # update G network
        # G optimizer zero grads
        G_optimizer.zero_grad()
        
        # G loss
        g_z = G(z)
        d_fake = D(g_z)
        g_loss = loss_func(d_fake, y_real)
        
        # G backward and step
        g_loss.backward()
        G_optimizer.step()
        
        D_total_loss += d_loss.item()
        G_total_loss += g_loss.item()
    
    return D_total_loss / len(trainloader), G_total_loss / len(trainloader)

After the model is trained, we want to inspect the images G generates; the visualize_results code below implements this. Note that the generated images lie in [-1, 1], so we must denormalize (denorm) them back to [0, 1].

def visualize_results(G, device, z_dim, result_size=20):
    G.eval()
    
    z = torch.rand(result_size, z_dim).to(device)
    g_z = G(z)
    
    show(torchvision.utils.make_grid(denorm(g_z.detach().cpu()), nrow=5))

Everything is ready; now let's try training a basic GAN. The run_gan function below calls train and visualize_results to train our GAN.

def run_gan(trainloader, G, D, G_optimizer, D_optimizer, loss_func, n_epochs, device, latent_dim):
    d_loss_hist = []
    g_loss_hist = []

    for epoch in range(n_epochs):
        d_loss, g_loss = train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device, 
                               z_dim=latent_dim)
        print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))

        d_loss_hist.append(d_loss)
        g_loss_hist.append(g_loss)

        if epoch == 0 or (epoch + 1) % 10 == 0:
            visualize_results(G, device, latent_dim) 
    
    return d_loss_hist, g_loss_hist

With the hyperparameters set, we can start training! Let's try it on the 2-class MNIST dataset.

# hyper params

# z dim
latent_dim = 100

# image size and channel
image_size=32
image_channel=1

# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)

# epochs and batch size
n_epochs = 100
batch_size = 32

# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:0')

# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# use BCELoss as loss function
bceloss = nn.BCELoss().to(device)

# G and D model
G = Generator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = Discriminator(image_size=image_size, input_channel=image_channel).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, 
                                   n_epochs, device, latent_dim)
    
Epoch 0: Train D loss: 1.1649, G loss: 0.7650

Epoch 1: Train D loss: 1.2279, G loss: 0.9637
Epoch 2: Train D loss: 1.2282, G loss: 1.0666
Epoch 3: Train D loss: 1.1667, G loss: 1.1114
Epoch 4: Train D loss: 1.1309, G loss: 1.1353
Epoch 5: Train D loss: 1.0876, G loss: 1.1348
Epoch 6: Train D loss: 1.0184, G loss: 1.2990
Epoch 7: Train D loss: 1.1157, G loss: 1.2255
Epoch 8: Train D loss: 1.0693, G loss: 1.2365
Epoch 9: Train D loss: 1.0913, G loss: 1.2913

Epoch 10: Train D loss: 1.0821, G loss: 1.3000
Epoch 11: Train D loss: 1.0513, G loss: 1.3288
Epoch 12: Train D loss: 1.0736, G loss: 1.3025
Epoch 13: Train D loss: 1.0847, G loss: 1.3232
Epoch 14: Train D loss: 1.0719, G loss: 1.2890
Epoch 15: Train D loss: 1.0308, G loss: 1.3940
Epoch 16: Train D loss: 1.0674, G loss: 1.3403
Epoch 17: Train D loss: 1.0661, G loss: 1.3169
Epoch 18: Train D loss: 1.0768, G loss: 1.3372
Epoch 19: Train D loss: 1.0884, G loss: 1.2901

Epoch 20: Train D loss: 1.0906, G loss: 1.3051
Epoch 21: Train D loss: 1.0893, G loss: 1.2822
Epoch 22: Train D loss: 1.0954, G loss: 1.2928
Epoch 23: Train D loss: 1.1046, G loss: 1.2994
Epoch 24: Train D loss: 1.1148, G loss: 1.2459
Epoch 25: Train D loss: 1.1094, G loss: 1.2694
Epoch 26: Train D loss: 1.0933, G loss: 1.3336
Epoch 27: Train D loss: 1.0758, G loss: 1.3245
Epoch 28: Train D loss: 1.0982, G loss: 1.3037
Epoch 29: Train D loss: 1.0962, G loss: 1.3157

Epoch 30: Train D loss: 1.0964, G loss: 1.3104
Epoch 31: Train D loss: 1.0724, G loss: 1.3115
Epoch 32: Train D loss: 1.0865, G loss: 1.3063
Epoch 33: Train D loss: 1.1004, G loss: 1.2856
Epoch 34: Train D loss: 1.1080, G loss: 1.2800
Epoch 35: Train D loss: 1.1101, G loss: 1.2834
Epoch 36: Train D loss: 1.1025, G loss: 1.2795
Epoch 37: Train D loss: 1.1234, G loss: 1.2600
Epoch 38: Train D loss: 1.1017, G loss: 1.2708
Epoch 39: Train D loss: 1.1237, G loss: 1.2449

Epoch 40: Train D loss: 1.1230, G loss: 1.2525
Epoch 41: Train D loss: 1.1228, G loss: 1.2605
Epoch 42: Train D loss: 1.1238, G loss: 1.2594
Epoch 43: Train D loss: 1.1149, G loss: 1.2411
Epoch 44: Train D loss: 1.1135, G loss: 1.2392
Epoch 45: Train D loss: 1.1308, G loss: 1.2189
Epoch 46: Train D loss: 1.1072, G loss: 1.2685
Epoch 47: Train D loss: 1.1002, G loss: 1.3011
Epoch 48: Train D loss: 1.1174, G loss: 1.2395
Epoch 49: Train D loss: 1.1197, G loss: 1.2757

Epoch 50: Train D loss: 1.1245, G loss: 1.2193
Epoch 51: Train D loss: 1.1292, G loss: 1.2130
Epoch 52: Train D loss: 1.1253, G loss: 1.2375
Epoch 53: Train D loss: 1.1124, G loss: 1.2234
Epoch 54: Train D loss: 1.1205, G loss: 1.2609
Epoch 55: Train D loss: 1.1471, G loss: 1.2197
Epoch 56: Train D loss: 1.1124, G loss: 1.2615
Epoch 57: Train D loss: 1.1222, G loss: 1.2320
Epoch 58: Train D loss: 1.1095, G loss: 1.2440
Epoch 59: Train D loss: 1.1336, G loss: 1.2440

Epoch 60: Train D loss: 1.1181, G loss: 1.2432
Epoch 61: Train D loss: 1.1277, G loss: 1.2391
Epoch 62: Train D loss: 1.1294, G loss: 1.2403
Epoch 63: Train D loss: 1.1377, G loss: 1.2410
Epoch 64: Train D loss: 1.1265, G loss: 1.2175
Epoch 65: Train D loss: 1.1265, G loss: 1.2886
Epoch 66: Train D loss: 1.1158, G loss: 1.2575
Epoch 67: Train D loss: 1.1066, G loss: 1.2554
Epoch 68: Train D loss: 1.1065, G loss: 1.2688
Epoch 69: Train D loss: 1.1181, G loss: 1.2756

Epoch 70: Train D loss: 1.1167, G loss: 1.2600
Epoch 71: Train D loss: 1.1186, G loss: 1.2660
Epoch 72: Train D loss: 1.1022, G loss: 1.2948
Epoch 73: Train D loss: 1.1089, G loss: 1.2779
Epoch 74: Train D loss: 1.1019, G loss: 1.3040
Epoch 75: Train D loss: 1.1211, G loss: 1.2732
Epoch 76: Train D loss: 1.0942, G loss: 1.3277
Epoch 77: Train D loss: 1.1080, G loss: 1.2794
Epoch 78: Train D loss: 1.0737, G loss: 1.3311
Epoch 79: Train D loss: 1.0908, G loss: 1.3429

Epoch 80: Train D loss: 1.0965, G loss: 1.3002
Epoch 81: Train D loss: 1.0829, G loss: 1.3260
Epoch 82: Train D loss: 1.0804, G loss: 1.3612
Epoch 83: Train D loss: 1.0843, G loss: 1.3444
Epoch 84: Train D loss: 1.0655, G loss: 1.3481
Epoch 85: Train D loss: 1.0808, G loss: 1.3640
Epoch 86: Train D loss: 1.0882, G loss: 1.3536
Epoch 87: Train D loss: 1.0728, G loss: 1.3587
Epoch 88: Train D loss: 1.0187, G loss: 1.4288
Epoch 89: Train D loss: 1.0442, G loss: 1.3970

Epoch 90: Train D loss: 1.0577, G loss: 1.3969
Epoch 91: Train D loss: 1.0337, G loss: 1.4507
Epoch 92: Train D loss: 1.0654, G loss: 1.4042
Epoch 93: Train D loss: 1.0480, G loss: 1.4118
Epoch 94: Train D loss: 1.0387, G loss: 1.3993
Epoch 95: Train D loss: 1.0224, G loss: 1.4461
Epoch 96: Train D loss: 1.0234, G loss: 1.4688
Epoch 97: Train D loss: 1.0574, G loss: 1.4508
Epoch 98: Train D loss: 1.0303, G loss: 1.4741
Epoch 99: Train D loss: 1.0033, G loss: 1.4639

After training, let's look at the images G generates. Even this simple GAN produces decent results on such a simple dataset, though flaws remain; for example, the generated digits contain quite a bit of strange speckle noise.

Let's plot the loss curves of G and D (run the statement below).

from utils import loss_plot
loss_plot(d_loss_hist, g_loss_hist)

Exercise:

Compare the loss curves of G and D with the loss curves of the CNNs we trained earlier. What differences do you observe? Briefly explain what might cause them.

Answer:

Generally speaking, during training a loss curve tends to decline and eventually converge. In a GAN, however, when the D loss goes down, the G loss goes up, and vice versa. This is because the generator and discriminator compete against each other: the generator tries to produce images that fool the discriminator, while the discriminator tries to see through the generator's disguise, so the two losses often move in opposite directions.

DCGAN

In DCGAN (Deep Convolutional GAN), the biggest change is replacing the fully connected layers with a CNN. The generator G uses stride-2 transposed convolutions to generate the image while upsampling it, and the discriminator D uses stride-2 convolutions to process and downsample the image. In addition, DCGAN inserts BatchNormalization between layers (which we already did in our vanilla GAN), uses ReLU activations in G, and LeakyReLU activations in D.
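As a quick sanity check (illustrative, not from the notebook), we can verify the shape arithmetic: a ConvTranspose2d with kernel 4, stride 2, padding 1 exactly doubles the spatial size, since out = (in - 1)·2 - 2·1 + 4 = 2·in, while a Conv2d with kernel 3, stride 2, padding 1 halves it:

```python
import torch
import torch.nn as nn

# upsampling block as used in the DCGAN generator
up = nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1)
# downsampling block as used in the DCGAN discriminator
down = nn.Conv2d(1, 128, 3, stride=2, padding=1)

x = torch.randn(2, 512, 4, 4)
assert up(x).shape == (2, 256, 8, 8)        # 4 -> 8: spatial size doubled

img = torch.randn(2, 1, 32, 32)
assert down(img).shape == (2, 128, 16, 16)  # 32 -> 16: spatial size halved
```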

from utils import initialize_weights
class DCGenerator(nn.Module):
    def __init__(self, image_size=32, latent_dim=64, output_channel=1):
        super(DCGenerator, self).__init__()
        self.image_size = image_size
        self.latent_dim = latent_dim
        self.output_channel = output_channel
        
        self.init_size = image_size // 8
        
        # fc: Linear -> BN -> ReLU
        self.fc = nn.Sequential(
            nn.Linear(latent_dim, 512 * self.init_size ** 2),
            nn.BatchNorm1d(512 * self.init_size ** 2),
            nn.ReLU(inplace=True)
        )
        
        # deconv: ConvTranspose2d(4, 2, 1) -> BN -> ReLU -> 
        #         ConvTranspose2d(4, 2, 1) -> BN -> ReLU -> 
        #         ConvTranspose2d(4, 2, 1) -> Tanh
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, output_channel, 4, stride=2, padding=1),
            nn.Tanh(),
        )
        initialize_weights(self)

    def forward(self, z):
        out = self.fc(z)
        out = out.view(out.shape[0], 512, self.init_size, self.init_size)
        img = self.deconv(out)
        return img


class DCDiscriminator(nn.Module):
    def __init__(self, image_size=32, input_channel=1, sigmoid=True):
        super(DCDiscriminator, self).__init__()
        self.image_size = image_size
        self.input_channel = input_channel
        self.fc_size = image_size // 8
        
        # conv: Conv2d(3,2,1) -> LeakyReLU 
        #       Conv2d(3,2,1) -> BN -> LeakyReLU 
        #       Conv2d(3,2,1) -> BN -> LeakyReLU 
        self.conv = nn.Sequential(
            nn.Conv2d(input_channel, 128, 3, 2, 1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 3, 2, 1),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 3, 2, 1),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2),
        )
        
        # fc: Linear -> Sigmoid
        self.fc = nn.Sequential(
            nn.Linear(512 * self.fc_size * self.fc_size, 1),
        )
        if sigmoid:
            self.fc.add_module('sigmoid', nn.Sigmoid())
        initialize_weights(self)
        
        

    def forward(self, img):
        out = self.conv(img)
        out = out.view(out.shape[0], -1)
        out = self.fc(out)

        return out

Likewise, let's train DCGAN on the same MNIST dataset.

# hyper params

# z dim
latent_dim = 100

# image size and channel
image_size=32
image_channel=1

# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)

# epochs and batch size
n_epochs = 100
batch_size = 32

# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:1')

# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# use BCELoss as loss function
bceloss = nn.BCELoss().to(device)

# G and D model, use DCGAN
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, 
                                   n_epochs, device, latent_dim)
Epoch 0: Train D loss: 0.3584, G loss: 4.7953

Epoch 1: Train D loss: 0.2336, G loss: 5.6612
Epoch 2: Train D loss: 0.2451, G loss: 5.4808
Epoch 3: Train D loss: 0.1608, G loss: 4.2449
Epoch 4: Train D loss: 0.4068, G loss: 3.9953
Epoch 5: Train D loss: 0.5673, G loss: 3.1682
Epoch 6: Train D loss: 0.5268, G loss: 2.9006
Epoch 7: Train D loss: 0.5768, G loss: 2.6063
Epoch 8: Train D loss: 0.5841, G loss: 2.5660
Epoch 9: Train D loss: 0.5123, G loss: 2.9250

Epoch 10: Train D loss: 0.3945, G loss: 2.6949
Epoch 11: Train D loss: 0.5941, G loss: 2.4399
Epoch 12: Train D loss: 0.5572, G loss: 2.3926
Epoch 13: Train D loss: 0.6046, G loss: 2.5039
Epoch 14: Train D loss: 0.6513, G loss: 2.1637
Epoch 15: Train D loss: 0.5793, G loss: 2.4047
Epoch 16: Train D loss: 0.6361, G loss: 2.4416
Epoch 17: Train D loss: 0.6193, G loss: 2.3791
Epoch 18: Train D loss: 0.5141, G loss: 2.4461
Epoch 19: Train D loss: 0.5765, G loss: 2.4536

Epoch 20: Train D loss: 0.6813, G loss: 2.2990
Epoch 21: Train D loss: 0.5250, G loss: 2.4540
Epoch 22: Train D loss: 0.6458, G loss: 2.5284
Epoch 23: Train D loss: 0.5561, G loss: 2.3798
Epoch 24: Train D loss: 0.4879, G loss: 2.6305
Epoch 25: Train D loss: 0.3962, G loss: 2.8168
Epoch 26: Train D loss: 0.5705, G loss: 2.7171
Epoch 27: Train D loss: 0.3177, G loss: 3.0004
Epoch 28: Train D loss: 0.4139, G loss: 3.0095
Epoch 29: Train D loss: 0.4706, G loss: 3.0445

Epoch 30: Train D loss: 0.2796, G loss: 3.2369
Epoch 31: Train D loss: 0.3112, G loss: 3.3666
Epoch 32: Train D loss: 0.2850, G loss: 3.5337
Epoch 33: Train D loss: 0.5064, G loss: 3.1986
Epoch 34: Train D loss: 0.3508, G loss: 3.5686
Epoch 35: Train D loss: 0.3044, G loss: 3.4096
Epoch 36: Train D loss: 0.3091, G loss: 3.6790
Epoch 37: Train D loss: 0.2097, G loss: 3.5555
Epoch 38: Train D loss: 0.0981, G loss: 4.0560
Epoch 39: Train D loss: 0.7379, G loss: 3.0092

Epoch 40: Train D loss: 0.1962, G loss: 3.4945
Epoch 41: Train D loss: 0.0956, G loss: 4.0326
Epoch 42: Train D loss: 0.5153, G loss: 3.7515
Epoch 43: Train D loss: 0.5432, G loss: 2.8158
Epoch 44: Train D loss: 0.1706, G loss: 3.7669
Epoch 45: Train D loss: 0.6248, G loss: 2.8461
Epoch 46: Train D loss: 0.1040, G loss: 3.9352
Epoch 47: Train D loss: 0.0919, G loss: 4.2825
Epoch 48: Train D loss: 0.0546, G loss: 4.6079
Epoch 49: Train D loss: 0.0652, G loss: 4.6525

Epoch 50: Train D loss: 0.7597, G loss: 3.4988
Epoch 51: Train D loss: 0.3397, G loss: 3.6112
Epoch 52: Train D loss: 0.0811, G loss: 4.2338
Epoch 53: Train D loss: 0.0490, G loss: 4.4919
Epoch 54: Train D loss: 0.0371, G loss: 4.7790
Epoch 55: Train D loss: 0.0375, G loss: 4.9030
Epoch 56: Train D loss: 0.6475, G loss: 3.8161
Epoch 57: Train D loss: 0.0770, G loss: 4.2638
Epoch 58: Train D loss: 0.0456, G loss: 4.7263
Epoch 59: Train D loss: 0.0372, G loss: 4.8959

Epoch 60: Train D loss: 0.0351, G loss: 5.1594
Epoch 61: Train D loss: 0.0281, G loss: 5.1873
Epoch 62: Train D loss: 0.8205, G loss: 4.4411
Epoch 63: Train D loss: 0.2567, G loss: 3.6164
Epoch 64: Train D loss: 0.5532, G loss: 3.3090
Epoch 65: Train D loss: 0.0831, G loss: 4.2039
Epoch 66: Train D loss: 0.0528, G loss: 4.6532
Epoch 67: Train D loss: 0.0327, G loss: 4.9808
Epoch 68: Train D loss: 0.0259, G loss: 5.1778
Epoch 69: Train D loss: 0.0264, G loss: 5.2096

Epoch 70: Train D loss: 1.2088, G loss: 2.6591
Epoch 71: Train D loss: 0.5735, G loss: 2.8551
Epoch 72: Train D loss: 0.5057, G loss: 3.3956
Epoch 73: Train D loss: 0.1402, G loss: 3.9483
Epoch 74: Train D loss: 0.0552, G loss: 4.5401
Epoch 75: Train D loss: 0.0370, G loss: 4.9205
Epoch 76: Train D loss: 0.0301, G loss: 5.0844
Epoch 77: Train D loss: 0.0247, G loss: 5.2022
Epoch 78: Train D loss: 0.0227, G loss: 5.4283
Epoch 79: Train D loss: 0.0169, G loss: 5.5034

Epoch 80: Train D loss: 0.0184, G loss: 5.5792
Epoch 81: Train D loss: 0.0159, G loss: 5.6387
Epoch 82: Train D loss: 0.0175, G loss: 5.6909
Epoch 83: Train D loss: 0.0137, G loss: 5.8635
Epoch 84: Train D loss: 0.0156, G loss: 5.8242
Epoch 85: Train D loss: 0.0149, G loss: 5.8942
Epoch 86: Train D loss: 1.2041, G loss: 3.8859
Epoch 87: Train D loss: 0.7222, G loss: 2.0960
Epoch 88: Train D loss: 0.6053, G loss: 2.9342
Epoch 89: Train D loss: 0.4476, G loss: 3.1945

Epoch 90: Train D loss: 0.1500, G loss: 4.0948
Epoch 91: Train D loss: 0.0551, G loss: 4.6313
Epoch 92: Train D loss: 0.8641, G loss: 2.7378
Epoch 93: Train D loss: 0.5070, G loss: 3.1373
Epoch 94: Train D loss: 0.0750, G loss: 4.3132
Epoch 95: Train D loss: 0.0365, G loss: 4.7940
Epoch 96: Train D loss: 0.0257, G loss: 5.0818
Epoch 97: Train D loss: 0.0231, G loss: 5.2663
Epoch 98: Train D loss: 0.0208, G loss: 5.4650
Epoch 99: Train D loss: 0.0203, G loss: 5.8589

loss_plot(d_loss_hist, g_loss_hist)

As we can see, DCGAN generates noticeably better images than the linear-layer-only GAN. Next, let's try training DCGAN on the furniture dataset.

# RGB image channel = 3
image_channel=3

# epochs
n_epochs = 300

# furniture dataset and dataloader
train_dataset = load_furniture_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# G and D model, use DCGAN
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, 
                                   n_epochs, device, latent_dim)
Epoch 0: Train D loss: 0.8826, G loss: 3.9182

Epoch 1: Train D loss: 0.3575, G loss: 5.6344
Epoch 2: Train D loss: 0.2020, G loss: 6.2485
Epoch 3: Train D loss: 0.1214, G loss: 6.3474
Epoch 4: Train D loss: 0.1045, G loss: 6.3169
Epoch 5: Train D loss: 0.4780, G loss: 8.4332
Epoch 6: Train D loss: 0.0895, G loss: 6.2010
Epoch 7: Train D loss: 0.1221, G loss: 6.1602
Epoch 8: Train D loss: 0.0836, G loss: 5.3933
Epoch 9: Train D loss: 0.0696, G loss: 5.7825

Epoch 10: Train D loss: 0.0637, G loss: 5.8573
Epoch 11: Train D loss: 0.1293, G loss: 5.5485
Epoch 12: Train D loss: 0.5371, G loss: 6.3349
Epoch 13: Train D loss: 0.5318, G loss: 4.4861
Epoch 14: Train D loss: 0.4635, G loss: 3.9430
Epoch 15: Train D loss: 0.2749, G loss: 4.5340
Epoch 16: Train D loss: 0.3709, G loss: 4.8660
Epoch 17: Train D loss: 0.2258, G loss: 4.8558
Epoch 18: Train D loss: 0.2302, G loss: 5.6346
Epoch 19: Train D loss: 0.7460, G loss: 5.7239

Epoch 20: Train D loss: 0.2786, G loss: 4.3962
Epoch 21: Train D loss: 0.1235, G loss: 4.7795
Epoch 22: Train D loss: 0.3446, G loss: 5.7504
Epoch 23: Train D loss: 0.2496, G loss: 4.4360
Epoch 24: Train D loss: 0.2506, G loss: 4.8346
Epoch 25: Train D loss: 0.1725, G loss: 5.4387
Epoch 26: Train D loss: 0.5889, G loss: 5.5629
Epoch 27: Train D loss: 0.2373, G loss: 5.1965
Epoch 28: Train D loss: 0.5494, G loss: 6.3583
Epoch 29: Train D loss: 0.3330, G loss: 4.8053

Epoch 30: Train D loss: 0.1652, G loss: 4.3051
Epoch 31: Train D loss: 0.2587, G loss: 4.6436
Epoch 32: Train D loss: 0.1888, G loss: 4.2422
Epoch 33: Train D loss: 0.2322, G loss: 4.9604
Epoch 34: Train D loss: 0.2087, G loss: 5.1216
Epoch 35: Train D loss: 0.2832, G loss: 5.4821
Epoch 36: Train D loss: 0.2822, G loss: 4.6083
Epoch 37: Train D loss: 0.2578, G loss: 4.2546
Epoch 38: Train D loss: 0.2150, G loss: 5.0202
Epoch 39: Train D loss: 0.2933, G loss: 5.5375

Epoch 40: Train D loss: 0.3894, G loss: 5.0670
Epoch 41: Train D loss: 0.2551, G loss: 4.4468
Epoch 42: Train D loss: 0.2212, G loss: 4.7803
Epoch 43: Train D loss: 0.1708, G loss: 4.4727
Epoch 44: Train D loss: 0.2954, G loss: 5.1404
Epoch 45: Train D loss: 0.2572, G loss: 4.9857
Epoch 46: Train D loss: 0.3257, G loss: 5.6155
Epoch 47: Train D loss: 0.2031, G loss: 4.4272
Epoch 48: Train D loss: 0.3561, G loss: 4.9110
Epoch 49: Train D loss: 0.3056, G loss: 4.5646

Epoch 50: Train D loss: 0.4974, G loss: 4.4510
Epoch 51: Train D loss: 0.4855, G loss: 4.9646
Epoch 52: Train D loss: 0.3893, G loss: 4.5736
Epoch 53: Train D loss: 0.3120, G loss: 4.2307
Epoch 54: Train D loss: 0.3935, G loss: 4.1914
Epoch 55: Train D loss: 0.3238, G loss: 4.2824
Epoch 56: Train D loss: 0.3969, G loss: 4.4219
Epoch 57: Train D loss: 0.4217, G loss: 4.1336
Epoch 58: Train D loss: 0.5265, G loss: 4.2526
Epoch 59: Train D loss: 0.3533, G loss: 3.7809

Epoch 60: Train D loss: 0.3335, G loss: 4.0121
Epoch 61: Train D loss: 0.3315, G loss: 3.8697
Epoch 62: Train D loss: 0.3591, G loss: 4.1303
Epoch 63: Train D loss: 0.3372, G loss: 4.2440
Epoch 64: Train D loss: 0.4152, G loss: 3.7262
Epoch 65: Train D loss: 0.4887, G loss: 3.9588
Epoch 66: Train D loss: 0.4257, G loss: 3.6588
Epoch 67: Train D loss: 0.4146, G loss: 3.7732
Epoch 68: Train D loss: 0.4303, G loss: 3.8034
Epoch 69: Train D loss: 0.4247, G loss: 4.2691

Epoch 70: Train D loss: 0.3877, G loss: 3.6103
Epoch 71: Train D loss: 0.5465, G loss: 3.7896
Epoch 72: Train D loss: 0.3692, G loss: 3.3271
Epoch 73: Train D loss: 0.3766, G loss: 3.4794
Epoch 74: Train D loss: 0.3256, G loss: 3.5183
Epoch 75: Train D loss: 0.4479, G loss: 3.5492
Epoch 76: Train D loss: 0.3959, G loss: 3.9748
Epoch 77: Train D loss: 0.5287, G loss: 4.2756
Epoch 78: Train D loss: 0.4709, G loss: 3.6952
Epoch 79: Train D loss: 0.4082, G loss: 3.5221

Epoch 80: Train D loss: 0.3642, G loss: 3.7835
Epoch 81: Train D loss: 0.3397, G loss: 3.8078
Epoch 82: Train D loss: 0.4058, G loss: 3.3936
Epoch 83: Train D loss: 0.2689, G loss: 3.3368
Epoch 84: Train D loss: 0.5323, G loss: 3.7139
Epoch 85: Train D loss: 0.5076, G loss: 3.8075
Epoch 86: Train D loss: 0.3495, G loss: 3.4680
Epoch 87: Train D loss: 0.2997, G loss: 3.2331
Epoch 88: Train D loss: 0.2942, G loss: 3.3047
Epoch 89: Train D loss: 0.3101, G loss: 3.4135

Epoch 90: Train D loss: 0.3526, G loss: 3.7103
Epoch 91: Train D loss: 0.4572, G loss: 3.6256
Epoch 92: Train D loss: 0.3424, G loss: 3.4160
Epoch 93: Train D loss: 0.4203, G loss: 3.4029
Epoch 94: Train D loss: 0.3403, G loss: 3.4399
Epoch 95: Train D loss: 0.4230, G loss: 3.4667
Epoch 96: Train D loss: 0.4932, G loss: 3.5622
Epoch 97: Train D loss: 0.3528, G loss: 3.2962
Epoch 98: Train D loss: 0.3525, G loss: 3.2702
Epoch 99: Train D loss: 0.4020, G loss: 3.3031

Epoch 100: Train D loss: 0.3591, G loss: 3.2756
Epoch 101: Train D loss: 0.3482, G loss: 3.3725
Epoch 102: Train D loss: 0.2758, G loss: 3.2736
Epoch 103: Train D loss: 0.3691, G loss: 3.2773
Epoch 104: Train D loss: 0.6185, G loss: 3.4816
Epoch 105: Train D loss: 0.3782, G loss: 3.2442
Epoch 106: Train D loss: 0.3799, G loss: 3.2770
Epoch 107: Train D loss: 0.5000, G loss: 3.3047
Epoch 108: Train D loss: 0.3916, G loss: 3.2297
Epoch 109: Train D loss: 0.3186, G loss: 3.1177

Epoch 110: Train D loss: 0.5081, G loss: 3.1745
Epoch 111: Train D loss: 0.4254, G loss: 3.2138
Epoch 112: Train D loss: 0.4278, G loss: 3.1869
Epoch 113: Train D loss: 0.2930, G loss: 3.1655
Epoch 114: Train D loss: 0.3008, G loss: 3.1255
Epoch 115: Train D loss: 0.3868, G loss: 3.0363
Epoch 116: Train D loss: 0.3744, G loss: 3.2061
Epoch 117: Train D loss: 0.3789, G loss: 3.3022
Epoch 118: Train D loss: 0.3704, G loss: 2.9627
Epoch 119: Train D loss: 0.4459, G loss: 3.1298

Epoch 120: Train D loss: 0.3725, G loss: 3.1259
Epoch 121: Train D loss: 0.3279, G loss: 3.3062
Epoch 122: Train D loss: 0.3211, G loss: 3.3848
Epoch 123: Train D loss: 0.5216, G loss: 3.3508
Epoch 124: Train D loss: 0.4734, G loss: 3.4292
Epoch 125: Train D loss: 0.3788, G loss: 3.4549
Epoch 126: Train D loss: 0.3468, G loss: 3.2332
Epoch 127: Train D loss: 0.2986, G loss: 3.2406
Epoch 128: Train D loss: 0.3691, G loss: 3.1616
Epoch 129: Train D loss: 0.3462, G loss: 3.2481

Epoch 130: Train D loss: 0.2922, G loss: 3.2644
Epoch 131: Train D loss: 0.2857, G loss: 3.1798
Epoch 132: Train D loss: 0.2523, G loss: 3.2659
Epoch 133: Train D loss: 0.4045, G loss: 3.1569
Epoch 134: Train D loss: 0.4833, G loss: 3.3265
Epoch 135: Train D loss: 0.4088, G loss: 3.2560
Epoch 136: Train D loss: 0.2900, G loss: 3.1801
Epoch 137: Train D loss: 0.4788, G loss: 3.5529
Epoch 138: Train D loss: 0.3414, G loss: 3.3124
Epoch 139: Train D loss: 0.2751, G loss: 3.1702

Epoch 140: Train D loss: 0.5243, G loss: 3.5204
Epoch 141: Train D loss: 0.2772, G loss: 3.3065
Epoch 142: Train D loss: 0.3032, G loss: 3.4532
Epoch 143: Train D loss: 0.2614, G loss: 3.2280
Epoch 144: Train D loss: 0.2469, G loss: 3.2647
Epoch 145: Train D loss: 0.2498, G loss: 3.1976
Epoch 146: Train D loss: 0.3165, G loss: 3.3571
Epoch 147: Train D loss: 0.3557, G loss: 3.5038
Epoch 148: Train D loss: 0.3499, G loss: 3.2992
Epoch 149: Train D loss: 0.2554, G loss: 3.3167

Epoch 150: Train D loss: 0.2076, G loss: 3.4642
Epoch 151: Train D loss: 0.2359, G loss: 3.4624
Epoch 152: Train D loss: 0.3424, G loss: 3.5394
Epoch 153: Train D loss: 0.3443, G loss: 3.6575
Epoch 154: Train D loss: 0.3361, G loss: 3.5467
Epoch 155: Train D loss: 0.2470, G loss: 3.5429
Epoch 156: Train D loss: 0.2043, G loss: 3.3507
Epoch 157: Train D loss: 0.2363, G loss: 3.5225
Epoch 158: Train D loss: 0.1845, G loss: 3.3518
Epoch 159: Train D loss: 0.2713, G loss: 3.6480

Epoch 160: Train D loss: 0.2313, G loss: 3.4382
Epoch 161: Train D loss: 0.2010, G loss: 3.6045
Epoch 162: Train D loss: 0.1885, G loss: 3.6591
Epoch 163: Train D loss: 0.2191, G loss: 3.6455
Epoch 164: Train D loss: 0.1851, G loss: 3.6189
Epoch 165: Train D loss: 0.2849, G loss: 3.6551
Epoch 166: Train D loss: 1.2764, G loss: 4.3937
Epoch 167: Train D loss: 0.5067, G loss: 3.7731
Epoch 168: Train D loss: 0.2813, G loss: 3.4066
Epoch 169: Train D loss: 0.2147, G loss: 3.4605

Epoch 170: Train D loss: 0.1914, G loss: 3.3578
Epoch 171: Train D loss: 0.1747, G loss: 3.4507
Epoch 172: Train D loss: 0.1616, G loss: 3.4037
Epoch 173: Train D loss: 0.1580, G loss: 3.4331
Epoch 174: Train D loss: 0.1526, G loss: 3.5904
Epoch 175: Train D loss: 0.1580, G loss: 3.5947
Epoch 176: Train D loss: 0.1391, G loss: 3.5780
Epoch 177: Train D loss: 0.1411, G loss: 3.6461
Epoch 178: Train D loss: 0.1528, G loss: 3.6418
Epoch 179: Train D loss: 0.1816, G loss: 3.5466

Epoch 180: Train D loss: 0.1667, G loss: 3.7887
Epoch 181: Train D loss: 0.1388, G loss: 3.5844
Epoch 182: Train D loss: 0.1764, G loss: 3.7683
Epoch 183: Train D loss: 0.2063, G loss: 3.7224
Epoch 184: Train D loss: 0.2536, G loss: 3.7802
Epoch 185: Train D loss: 0.2675, G loss: 3.7328
Epoch 186: Train D loss: 0.1605, G loss: 3.7314
Epoch 187: Train D loss: 0.1448, G loss: 3.7116
Epoch 188: Train D loss: 0.1201, G loss: 3.7391
Epoch 189: Train D loss: 0.1185, G loss: 3.9045

Epoch 190: Train D loss: 0.1323, G loss: 3.9510
Epoch 191: Train D loss: 0.1313, G loss: 3.9670
Epoch 192: Train D loss: 0.1430, G loss: 3.9686
Epoch 193: Train D loss: 0.1257, G loss: 3.8924
Epoch 194: Train D loss: 0.1499, G loss: 4.1035
Epoch 195: Train D loss: 1.3633, G loss: 4.4736
Epoch 196: Train D loss: 1.1516, G loss: 4.2698
Epoch 197: Train D loss: 0.4091, G loss: 3.7980
Epoch 198: Train D loss: 0.2339, G loss: 3.4761
Epoch 199: Train D loss: 0.1893, G loss: 3.4213

Epoch 200: Train D loss: 0.1470, G loss: 3.5397
Epoch 201: Train D loss: 0.1358, G loss: 3.5806
Epoch 202: Train D loss: 0.1293, G loss: 3.6882
Epoch 203: Train D loss: 0.1181, G loss: 3.6890
Epoch 204: Train D loss: 0.1092, G loss: 3.7754
Epoch 205: Train D loss: 0.1102, G loss: 3.7271
Epoch 206: Train D loss: 0.1036, G loss: 3.7388
Epoch 207: Train D loss: 0.1185, G loss: 3.9432
Epoch 208: Train D loss: 0.1110, G loss: 4.0186
Epoch 209: Train D loss: 0.0929, G loss: 3.9429

Epoch 210: Train D loss: 0.1048, G loss: 4.0539
Epoch 211: Train D loss: 0.1091, G loss: 4.1257
Epoch 212: Train D loss: 0.1097, G loss: 3.9354
Epoch 213: Train D loss: 0.0960, G loss: 4.0158
Epoch 214: Train D loss: 0.0834, G loss: 4.0064
Epoch 215: Train D loss: 0.0850, G loss: 4.0085
Epoch 216: Train D loss: 0.0863, G loss: 4.0331
Epoch 217: Train D loss: 0.0893, G loss: 4.1036
Epoch 218: Train D loss: 0.0966, G loss: 4.0656
Epoch 219: Train D loss: 1.3569, G loss: 4.7065

Epoch 220: Train D loss: 0.5661, G loss: 4.3398
Epoch 221: Train D loss: 0.2355, G loss: 3.8037
Epoch 222: Train D loss: 0.1663, G loss: 3.8645
Epoch 223: Train D loss: 0.1204, G loss: 3.8585
Epoch 224: Train D loss: 0.1015, G loss: 3.8722
Epoch 225: Train D loss: 0.0944, G loss: 3.8770
Epoch 226: Train D loss: 0.0915, G loss: 3.8919
Epoch 227: Train D loss: 0.1014, G loss: 4.0422
Epoch 228: Train D loss: 0.0857, G loss: 4.0039
Epoch 229: Train D loss: 0.0774, G loss: 4.1197

Epoch 230: Train D loss: 0.0860, G loss: 4.0603
Epoch 231: Train D loss: 0.1111, G loss: 4.0977
Epoch 232: Train D loss: 0.0819, G loss: 4.0729
Epoch 233: Train D loss: 0.0814, G loss: 4.2246
Epoch 234: Train D loss: 0.0728, G loss: 4.1516
Epoch 235: Train D loss: 0.0670, G loss: 4.2666
Epoch 236: Train D loss: 0.0621, G loss: 4.2648
Epoch 237: Train D loss: 0.0628, G loss: 4.2586
Epoch 238: Train D loss: 0.0647, G loss: 4.2370
Epoch 239: Train D loss: 0.0699, G loss: 4.2767

Epoch 240: Train D loss: 0.0760, G loss: 4.4025
Epoch 241: Train D loss: 0.0748, G loss: 4.3139
Epoch 242: Train D loss: 0.0821, G loss: 4.2643
Epoch 243: Train D loss: 0.1257, G loss: 4.7055
Epoch 244: Train D loss: 0.1265, G loss: 4.3560
Epoch 245: Train D loss: 0.0731, G loss: 4.3900
Epoch 246: Train D loss: 0.0671, G loss: 4.4087
Epoch 247: Train D loss: 0.0687, G loss: 4.3186
Epoch 248: Train D loss: 0.0600, G loss: 4.4898
Epoch 249: Train D loss: 0.0599, G loss: 4.4394

Epoch 250: Train D loss: 0.0537, G loss: 4.4493
Epoch 251: Train D loss: 0.0620, G loss: 4.4511
Epoch 252: Train D loss: 0.0503, G loss: 4.5975
Epoch 253: Train D loss: 0.0515, G loss: 4.5494
Epoch 254: Train D loss: 0.0557, G loss: 4.6418
Epoch 255: Train D loss: 2.1966, G loss: 4.8108
Epoch 256: Train D loss: 1.6376, G loss: 3.8282
Epoch 257: Train D loss: 0.7410, G loss: 4.0253
Epoch 258: Train D loss: 0.3818, G loss: 3.6517
Epoch 259: Train D loss: 0.2499, G loss: 3.5759

Epoch 260: Train D loss: 0.1771, G loss: 3.6214
Epoch 261: Train D loss: 0.1395, G loss: 3.6892
Epoch 262: Train D loss: 0.1184, G loss: 3.7579
Epoch 263: Train D loss: 0.1154, G loss: 3.9557
Epoch 264: Train D loss: 0.1076, G loss: 3.9747
Epoch 265: Train D loss: 0.1028, G loss: 3.9015
Epoch 266: Train D loss: 0.0906, G loss: 3.9886
Epoch 267: Train D loss: 0.0865, G loss: 3.9569
Epoch 268: Train D loss: 0.0741, G loss: 4.0610
Epoch 269: Train D loss: 0.0750, G loss: 4.1080

Epoch 270: Train D loss: 0.0584, G loss: 4.1823
Epoch 271: Train D loss: 0.0580, G loss: 4.2771
Epoch 272: Train D loss: 0.0620, G loss: 4.2002
Epoch 273: Train D loss: 0.0568, G loss: 4.3134
Epoch 274: Train D loss: 0.0728, G loss: 4.4302
Epoch 275: Train D loss: 0.0595, G loss: 4.4666
Epoch 276: Train D loss: 0.0583, G loss: 4.3717
Epoch 277: Train D loss: 0.0567, G loss: 4.3564
Epoch 278: Train D loss: 0.0525, G loss: 4.3592
Epoch 279: Train D loss: 0.0518, G loss: 4.4249

Epoch 280: Train D loss: 0.0537, G loss: 4.4662
Epoch 281: Train D loss: 0.0453, G loss: 4.5406
Epoch 282: Train D loss: 0.0535, G loss: 4.4714
Epoch 283: Train D loss: 0.0508, G loss: 4.6274
Epoch 284: Train D loss: 0.0557, G loss: 4.5374
Epoch 285: Train D loss: 0.0536, G loss: 4.5408
Epoch 286: Train D loss: 0.0453, G loss: 4.5735
Epoch 287: Train D loss: 0.0442, G loss: 4.6368
Epoch 288: Train D loss: 0.0429, G loss: 4.6867
Epoch 289: Train D loss: 0.0469, G loss: 4.6189

Epoch 290: Train D loss: 0.0432, G loss: 4.7058
Epoch 291: Train D loss: 0.0428, G loss: 4.7552
Epoch 292: Train D loss: 0.0459, G loss: 4.6039
Epoch 293: Train D loss: 0.0482, G loss: 4.8490
Epoch 294: Train D loss: 0.0422, G loss: 4.7337
Epoch 295: Train D loss: 0.0427, G loss: 4.7856
Epoch 296: Train D loss: 0.0553, G loss: 4.8967
Epoch 297: Train D loss: 1.9257, G loss: 5.1035
Epoch 298: Train D loss: 1.1107, G loss: 4.4166
Epoch 299: Train D loss: 0.3915, G loss: 4.0837

loss_plot(d_loss_hist, g_loss_hist)

LSGAN

LSGAN (Least Squares GAN) replaces the cross-entropy loss with an L2 (least-squares) loss. The optimization objectives for D and G are:

$$\min_D \; \frac{1}{2}\,\mathbb{E}_{x\sim p_{data}}\big[(D(x)-1)^2\big] + \frac{1}{2}\,\mathbb{E}_{z}\big[D(G(z))^2\big]$$

$$\min_G \; \frac{1}{2}\,\mathbb{E}_{z}\big[(D(G(z))-1)^2\big]$$

Exercise:

Complete the L2Loss code below to implement the L2 loss for the objectives above, then use it to train an LSGAN on the MNIST dataset and show the generated images and the loss curves.

Hint: the factor 1/2 in the objectives above can be ignored. The L2 loss is simply the MSE (mean squared error). The argument input_ is the probability that the discriminator D predicts "real" (size batch_size*1), and target is the label, 1 or 0 (size batch_size*1). Only PyTorch and plain Python operations are allowed (you may not call MSELoss directly).

class L2Loss(nn.Module):
    
    def __init__(self):
        super(L2Loss, self).__init__()
    
    def forward(self, input_, target):
        """
        input_: (batch_size*1), probability that D predicts "real"
        target: (batch_size*1), labels, 1 or 0
        """
        # mean squared error over the batch (equivalent to MSELoss)
        return ((input_ - target) ** 2).mean()
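As a quick sanity check, the same computation can be reproduced with plain Python on a toy batch (the values below are made up for illustration); this is exactly the mean of squared differences that MSELoss would compute:

```python
# toy batch: D's "real" probabilities and the corresponding labels
preds = [0.9, 0.2, 0.7, 0.4]
labels = [1.0, 0.0, 1.0, 0.0]

# L2 / MSE loss computed by hand: mean of squared differences
l2 = sum((p - t) ** 2 for p, t in zip(preds, labels)) / len(preds)
print(round(l2, 4))  # → 0.075
```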

After completing the code above, use your L2Loss to train a DCGAN on the MNIST dataset.

# hyper params

# z dim
latent_dim = 100

# image size and channel
image_size=32
image_channel=1

# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)

# epochs and batch size
n_epochs = 100
batch_size = 32

# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:0')

# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# use L2Loss as loss function
l2loss = L2Loss().to(device)

# G and D model, use DCGAN
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, l2loss, n_epochs, device, 
                                   latent_dim)
loss_plot(d_loss_hist, g_loss_hist)
Epoch 0: Train D loss: 0.0631, G loss: 0.9534

Epoch 1: Train D loss: 0.0268, G loss: 0.9953
Epoch 2: Train D loss: 0.0002, G loss: 1.0000
Epoch 3: Train D loss: 0.0001, G loss: 1.0000
Epoch 4: Train D loss: 0.0000, G loss: 1.0000
Epoch 5: Train D loss: 0.0000, G loss: 1.0000
Epoch 6: Train D loss: 0.0000, G loss: 1.0000
Epoch 7: Train D loss: 0.0000, G loss: 1.0000
Epoch 8: Train D loss: 0.0000, G loss: 1.0000
Epoch 9: Train D loss: 0.0000, G loss: 1.0000

Epoch 10: Train D loss: 0.0000, G loss: 1.0000
Epoch 11: Train D loss: 0.0000, G loss: 1.0000
Epoch 12: Train D loss: 0.0000, G loss: 1.0000
Epoch 13: Train D loss: 0.0000, G loss: 1.0000
Epoch 14: Train D loss: 0.0000, G loss: 0.9999
Epoch 15: Train D loss: 0.0155, G loss: 0.9995
Epoch 16: Train D loss: 0.0000, G loss: 0.9999
Epoch 17: Train D loss: 0.0855, G loss: 0.9992
Epoch 18: Train D loss: 1.0000, G loss: 1.0000
Epoch 19: Train D loss: 1.0000, G loss: 1.0000

Epoch 20: Train D loss: 1.0000, G loss: 1.0000
Epoch 21: Train D loss: 1.0000, G loss: 1.0000
Epoch 22: Train D loss: 1.0000, G loss: 1.0000
Epoch 23: Train D loss: 1.0000, G loss: 1.0000
Epoch 24: Train D loss: 0.9999, G loss: 1.0000
Epoch 25: Train D loss: 0.4592, G loss: 1.0000
Epoch 26: Train D loss: 0.0000, G loss: 1.0000
Epoch 27: Train D loss: 0.0000, G loss: 1.0000
Epoch 28: Train D loss: 0.0000, G loss: 1.0000
Epoch 29: Train D loss: 0.0000, G loss: 1.0000

Epoch 30: Train D loss: 0.0000, G loss: 1.0000
Epoch 31: Train D loss: 0.0000, G loss: 1.0000
Epoch 32: Train D loss: 0.0000, G loss: 1.0000
Epoch 33: Train D loss: 0.0000, G loss: 1.0000
Epoch 34: Train D loss: 0.0000, G loss: 1.0000
Epoch 35: Train D loss: 0.0000, G loss: 1.0000
Epoch 36: Train D loss: 0.0000, G loss: 1.0000
Epoch 37: Train D loss: 0.0000, G loss: 1.0000
Epoch 38: Train D loss: 0.0000, G loss: 1.0000
Epoch 39: Train D loss: 0.0000, G loss: 1.0000

Epoch 40: Train D loss: 0.0000, G loss: 1.0000
Epoch 41: Train D loss: 0.0000, G loss: 1.0000
Epoch 42: Train D loss: 0.0000, G loss: 1.0000
Epoch 43: Train D loss: 0.0000, G loss: 1.0000
Epoch 44: Train D loss: 0.0000, G loss: 1.0000
Epoch 45: Train D loss: 0.0000, G loss: 1.0000
Epoch 46: Train D loss: 0.0000, G loss: 1.0000
Epoch 47: Train D loss: 0.0000, G loss: 1.0000
Epoch 48: Train D loss: 0.0000, G loss: 1.0000
Epoch 49: Train D loss: 0.0000, G loss: 1.0000

Epoch 50: Train D loss: 0.0000, G loss: 1.0000
Epoch 51: Train D loss: 0.0000, G loss: 1.0000
Epoch 52: Train D loss: 0.0000, G loss: 1.0000
Epoch 53: Train D loss: 0.0000, G loss: 1.0000
Epoch 54: Train D loss: 0.0000, G loss: 1.0000
Epoch 55: Train D loss: 0.0000, G loss: 1.0000
Epoch 56: Train D loss: 0.0000, G loss: 1.0000
Epoch 57: Train D loss: 0.0000, G loss: 1.0000
Epoch 58: Train D loss: 0.0000, G loss: 1.0000
Epoch 59: Train D loss: 0.0000, G loss: 1.0000

Epoch 60: Train D loss: 0.0000, G loss: 1.0000
Epoch 61: Train D loss: 0.0000, G loss: 1.0000
Epoch 62: Train D loss: 0.0000, G loss: 1.0000
Epoch 63: Train D loss: 0.0000, G loss: 1.0000
Epoch 64: Train D loss: 0.0000, G loss: 1.0000
Epoch 65: Train D loss: 0.0000, G loss: 1.0000
Epoch 66: Train D loss: 0.0000, G loss: 1.0000
Epoch 67: Train D loss: 0.0000, G loss: 1.0000
Epoch 68: Train D loss: 0.0000, G loss: 1.0000
Epoch 69: Train D loss: 0.0000, G loss: 1.0000

Epoch 70: Train D loss: 0.0000, G loss: 1.0000
Epoch 71: Train D loss: 0.0000, G loss: 1.0000
Epoch 72: Train D loss: 0.0000, G loss: 1.0000
Epoch 73: Train D loss: 0.0000, G loss: 1.0000
Epoch 74: Train D loss: 0.0000, G loss: 1.0000
Epoch 75: Train D loss: 0.0000, G loss: 1.0000
Epoch 76: Train D loss: 0.0000, G loss: 1.0000
Epoch 77: Train D loss: 0.0000, G loss: 1.0000
Epoch 78: Train D loss: 0.0000, G loss: 1.0000
Epoch 79: Train D loss: 0.0000, G loss: 1.0000

Epoch 80: Train D loss: 0.0000, G loss: 1.0000
Epoch 81: Train D loss: 0.0000, G loss: 1.0000
Epoch 82: Train D loss: 0.0000, G loss: 1.0000
Epoch 83: Train D loss: 0.0000, G loss: 1.0000
Epoch 84: Train D loss: 0.0000, G loss: 1.0000
Epoch 85: Train D loss: 0.0000, G loss: 1.0000
Epoch 86: Train D loss: 0.0000, G loss: 1.0000
Epoch 87: Train D loss: 0.0000, G loss: 1.0000
Epoch 88: Train D loss: 0.0000, G loss: 1.0000
Epoch 89: Train D loss: 0.0000, G loss: 1.0000

Epoch 90: Train D loss: 0.0000, G loss: 1.0000
Epoch 91: Train D loss: 0.0000, G loss: 1.0000
Epoch 92: Train D loss: 0.0000, G loss: 1.0000
Epoch 93: Train D loss: 0.0000, G loss: 1.0000
Epoch 94: Train D loss: 0.0000, G loss: 1.0000
Epoch 95: Train D loss: 0.0000, G loss: 1.0000
Epoch 96: Train D loss: 0.0000, G loss: 1.0000
Epoch 97: Train D loss: 0.0000, G loss: 1.0000
Epoch 98: Train D loss: 0.0000, G loss: 1.0000
Epoch 99: Train D loss: 0.0000, G loss: 1.0000

WGAN

GANs still suffer from training instability and mode collapse (where the diversity of the generated images becomes extremely low; our dataset may not necessarily exhibit this). WGAN (Wasserstein GAN) replaces the JS divergence fitted by the traditional GAN with the Wasserstein distance, which alleviates both problems to some extent.

The discriminator's objective in WGAN becomes: under the Lipschitz continuity constraint (which we can satisfy by restricting the weights w to some bounded range), maximize

$$\mathbb{E}_{x\sim p_{data}}[D(x)] - \mathbb{E}_{z\sim p_z}[D(G(z))]$$

which approximates the Wasserstein distance between the real and generated distributions. The loss functions for D and G therefore become:

$$L_D = -\mathbb{E}_{x\sim p_{data}}[D(x)] + \mathbb{E}_{z\sim p_z}[D(G(z))], \qquad L_G = -\mathbb{E}_{z\sim p_z}[D(G(z))]$$

Concretely, the implementation of WGAN involves three main changes:

  • remove the sigmoid from the last layer of the discriminator D
  • drop the log from the G and D losses
  • after every update of D, clip the absolute value of its parameters to a fixed constant c

Accordingly, we mainly rewrite the WGAN training function. The network here is the DCGAN with the sigmoid removed (note that D is initialized with sigmoid=False to drop the final sigmoid layer).

Below is the WGAN implementation. Two parameters are added: n_d, the number of D updates per G update, and weight_clip, the clipping constant.

def wgan_train(trainloader, G, D, G_optimizer, D_optimizer, device, z_dim, n_d=2, weight_clip=0.01):
    
    """
    n_d: the number of iterations of D update per G update iteration
    weight_clip: the clipping parameters
    """
    
    D.train()
    G.train()
    
    D_total_loss = 0
    G_total_loss = 0
    
    for i, (x, _) in enumerate(trainloader):
        
        x = x.to(device)
        
        # update D network
        # D optimizer zero grads
        D_optimizer.zero_grad()
        
        # D real loss from real images
        d_real = D(x)
        d_real_loss = - d_real.mean()
        
        # D fake loss from fake images generated by G
        z = torch.rand(x.size(0), z_dim).to(device)
        g_z = G(z)
        d_fake = D(g_z)
        d_fake_loss = d_fake.mean()
        
        # D backward and step
        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        D_optimizer.step()
        
        # D weight clip
        for params in D.parameters():
            params.data.clamp_(-weight_clip, weight_clip)
            
        D_total_loss += d_loss.item()

        # update G network
        if (i + 1) % n_d == 0:
            # G optimizer zero grads
            G_optimizer.zero_grad()

            # G loss
            g_z = G(z)
            d_fake = D(g_z)
            g_loss = - d_fake.mean()

            # G backward and step
            g_loss.backward()
            G_optimizer.step()
            
            G_total_loss += g_loss.item()
    
    return D_total_loss / len(trainloader), G_total_loss * n_d / len(trainloader)
def run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device, latent_dim, n_d, weight_clip):
    d_loss_hist = []
    g_loss_hist = []

    for epoch in range(n_epochs):
        d_loss, g_loss = wgan_train(trainloader, G, D, G_optimizer, D_optimizer, device, 
                               z_dim=latent_dim, n_d=n_d, weight_clip=weight_clip)
        print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))

        d_loss_hist.append(d_loss)
        g_loss_hist.append(g_loss)

        if epoch == 0 or (epoch + 1) % 10 == 0:
            visualize_results(G, device, latent_dim) 
    
    return d_loss_hist, g_loss_hist

Next, let's run the run_wgan we just wrote on our furniture (chair) dataset and see how it performs.

# hyper params

# z dim
latent_dim = 100

# image size and channel
image_size=32
image_channel=3

# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)

# epochs and batch size
n_epochs = 300
batch_size = 32

# n_d: the number of iterations of D update per G update iteration
n_d = 2
weight_clip=0.01

# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:0')

# furniture dataset and dataloader
train_dataset = load_furniture_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device, 
                                    latent_dim, n_d, weight_clip)
Epoch 0: Train D loss: -0.0106, G loss: -0.0003

Epoch 1: Train D loss: -0.0576, G loss: 0.0163
Epoch 2: Train D loss: -0.1321, G loss: 0.0897
Epoch 3: Train D loss: -0.2723, G loss: 0.1958
Epoch 4: Train D loss: -0.4514, G loss: 0.2948
Epoch 5: Train D loss: -0.6250, G loss: 0.3647
Epoch 6: Train D loss: -0.7757, G loss: 0.4329
Epoch 7: Train D loss: -0.7672, G loss: 0.4643
Epoch 8: Train D loss: -0.6148, G loss: 0.4314
Epoch 9: Train D loss: -0.6224, G loss: 0.4193

Epoch 10: Train D loss: -0.7804, G loss: 0.4699
Epoch 11: Train D loss: -0.6644, G loss: 0.4546
Epoch 12: Train D loss: -0.6075, G loss: 0.4116
Epoch 13: Train D loss: -0.6073, G loss: 0.4478
Epoch 14: Train D loss: -0.6728, G loss: 0.4871
Epoch 15: Train D loss: -0.6588, G loss: 0.4808
Epoch 16: Train D loss: -0.7344, G loss: 0.4943
Epoch 17: Train D loss: -0.6334, G loss: 0.4702
Epoch 18: Train D loss: -0.6585, G loss: 0.4845
Epoch 19: Train D loss: -0.6050, G loss: 0.4522

Epoch 20: Train D loss: -0.6761, G loss: 0.4530
Epoch 21: Train D loss: -0.5897, G loss: 0.4198
Epoch 22: Train D loss: -0.5987, G loss: 0.4399
Epoch 23: Train D loss: -0.6206, G loss: 0.4073
Epoch 24: Train D loss: -0.6152, G loss: 0.3896
Epoch 25: Train D loss: -0.6615, G loss: 0.4472
Epoch 26: Train D loss: -0.6395, G loss: 0.4179
Epoch 27: Train D loss: -0.6404, G loss: 0.4165
Epoch 28: Train D loss: -0.6291, G loss: 0.4365
Epoch 29: Train D loss: -0.6135, G loss: 0.4192

Epoch 30: Train D loss: -0.6053, G loss: 0.4376
Epoch 31: Train D loss: -0.6043, G loss: 0.4346
Epoch 32: Train D loss: -0.5978, G loss: 0.4226
Epoch 33: Train D loss: -0.6056, G loss: 0.4097
Epoch 34: Train D loss: -0.5741, G loss: 0.4011
Epoch 35: Train D loss: -0.5361, G loss: 0.3793
Epoch 36: Train D loss: -0.6222, G loss: 0.4149
Epoch 37: Train D loss: -0.5621, G loss: 0.4019
Epoch 38: Train D loss: -0.5724, G loss: 0.4003
Epoch 39: Train D loss: -0.5507, G loss: 0.3785

Epoch 40: Train D loss: -0.5884, G loss: 0.3888
Epoch 41: Train D loss: -0.5971, G loss: 0.3870
Epoch 42: Train D loss: -0.5294, G loss: 0.3488
Epoch 43: Train D loss: -0.5729, G loss: 0.3791
Epoch 44: Train D loss: -0.5841, G loss: 0.3650
Epoch 45: Train D loss: -0.6185, G loss: 0.3752
Epoch 46: Train D loss: -0.5427, G loss: 0.3589
Epoch 47: Train D loss: -0.6060, G loss: 0.3700
Epoch 48: Train D loss: -0.6063, G loss: 0.3592
Epoch 49: Train D loss: -0.5155, G loss: 0.3251

Epoch 50: Train D loss: -0.5045, G loss: 0.3374
Epoch 51: Train D loss: -0.5448, G loss: 0.3570
Epoch 52: Train D loss: -0.5792, G loss: 0.3710
Epoch 53: Train D loss: -0.4759, G loss: 0.3381
Epoch 54: Train D loss: -0.5579, G loss: 0.3684
Epoch 55: Train D loss: -0.5357, G loss: 0.3734
Epoch 56: Train D loss: -0.5128, G loss: 0.3042
Epoch 57: Train D loss: -0.5157, G loss: 0.3243
Epoch 58: Train D loss: -0.5798, G loss: 0.3791
Epoch 59: Train D loss: -0.5066, G loss: 0.3645

Epoch 60: Train D loss: -0.4425, G loss: 0.3200
Epoch 61: Train D loss: -0.5602, G loss: 0.3509
Epoch 62: Train D loss: -0.5013, G loss: 0.3218
Epoch 63: Train D loss: -0.5353, G loss: 0.3407
Epoch 64: Train D loss: -0.5612, G loss: 0.3407
Epoch 65: Train D loss: -0.4357, G loss: 0.2939
Epoch 66: Train D loss: -0.5619, G loss: 0.3730
Epoch 67: Train D loss: -0.4945, G loss: 0.3518
Epoch 68: Train D loss: -0.5163, G loss: 0.3254
Epoch 69: Train D loss: -0.5649, G loss: 0.3548

Epoch 70: Train D loss: -0.4887, G loss: 0.3297
Epoch 71: Train D loss: -0.4842, G loss: 0.2658
Epoch 72: Train D loss: -0.5194, G loss: 0.3188
Epoch 73: Train D loss: -0.5194, G loss: 0.3187
Epoch 74: Train D loss: -0.5243, G loss: 0.3388
Epoch 75: Train D loss: -0.5509, G loss: 0.4140
Epoch 76: Train D loss: -0.4807, G loss: 0.2785
Epoch 77: Train D loss: -0.5223, G loss: 0.3157
Epoch 78: Train D loss: -0.5197, G loss: 0.3087
Epoch 79: Train D loss: -0.5339, G loss: 0.3033

Epoch 80: Train D loss: -0.5115, G loss: 0.2956
Epoch 81: Train D loss: -0.4891, G loss: 0.2814
Epoch 82: Train D loss: -0.4961, G loss: 0.2864
Epoch 83: Train D loss: -0.5515, G loss: 0.3625
Epoch 84: Train D loss: -0.4786, G loss: 0.3352
Epoch 85: Train D loss: -0.5511, G loss: 0.3459
Epoch 86: Train D loss: -0.4491, G loss: 0.2922
Epoch 87: Train D loss: -0.5459, G loss: 0.3560
Epoch 88: Train D loss: -0.4867, G loss: 0.3294
Epoch 89: Train D loss: -0.4904, G loss: 0.2581

Epoch 90: Train D loss: -0.5138, G loss: 0.3237
Epoch 91: Train D loss: -0.4629, G loss: 0.2487
Epoch 92: Train D loss: -0.5090, G loss: 0.2779
Epoch 93: Train D loss: -0.4658, G loss: 0.2791
Epoch 94: Train D loss: -0.4897, G loss: 0.3065
Epoch 95: Train D loss: -0.4808, G loss: 0.2810
Epoch 96: Train D loss: -0.5073, G loss: 0.3268
Epoch 97: Train D loss: -0.4757, G loss: 0.2755
Epoch 98: Train D loss: -0.5227, G loss: 0.3071
Epoch 99: Train D loss: -0.4912, G loss: 0.2801

Epoch 100: Train D loss: -0.4756, G loss: 0.2732
Epoch 101: Train D loss: -0.5089, G loss: 0.3687
Epoch 102: Train D loss: -0.4929, G loss: 0.3310
Epoch 103: Train D loss: -0.4768, G loss: 0.2819
Epoch 104: Train D loss: -0.5596, G loss: 0.4116
Epoch 105: Train D loss: -0.4418, G loss: 0.2584
Epoch 106: Train D loss: -0.4968, G loss: 0.2743
Epoch 107: Train D loss: -0.4669, G loss: 0.2851
Epoch 108: Train D loss: -0.5151, G loss: 0.4155
Epoch 109: Train D loss: -0.4529, G loss: 0.2799

Epoch 110: Train D loss: -0.5324, G loss: 0.3823
Epoch 111: Train D loss: -0.4281, G loss: 0.2953
Epoch 112: Train D loss: -0.5084, G loss: 0.3092
Epoch 113: Train D loss: -0.4925, G loss: 0.2819
Epoch 114: Train D loss: -0.4616, G loss: 0.3123
Epoch 115: Train D loss: -0.4997, G loss: 0.3104
Epoch 116: Train D loss: -0.4582, G loss: 0.3500
Epoch 117: Train D loss: -0.4789, G loss: 0.2677
Epoch 118: Train D loss: -0.4680, G loss: 0.3473
Epoch 119: Train D loss: -0.4699, G loss: 0.3026

Epoch 120: Train D loss: -0.4718, G loss: 0.3763
Epoch 121: Train D loss: -0.4374, G loss: 0.2891
Epoch 122: Train D loss: -0.4969, G loss: 0.3438
Epoch 123: Train D loss: -0.4526, G loss: 0.3063
Epoch 124: Train D loss: -0.4635, G loss: 0.2811
Epoch 125: Train D loss: -0.4467, G loss: 0.3077
Epoch 126: Train D loss: -0.4583, G loss: 0.3161
Epoch 127: Train D loss: -0.4154, G loss: 0.3032
Epoch 128: Train D loss: -0.4801, G loss: 0.3131
Epoch 129: Train D loss: -0.4206, G loss: 0.2345

Epoch 130: Train D loss: -0.3962, G loss: 0.2697
Epoch 131: Train D loss: -0.4477, G loss: 0.2962
Epoch 132: Train D loss: -0.4057, G loss: 0.3145
Epoch 133: Train D loss: -0.4672, G loss: 0.2886
Epoch 134: Train D loss: -0.3937, G loss: 0.2673
Epoch 135: Train D loss: -0.4625, G loss: 0.2746
Epoch 136: Train D loss: -0.4271, G loss: 0.2796
Epoch 137: Train D loss: -0.4343, G loss: 0.2854
Epoch 138: Train D loss: -0.3950, G loss: 0.2374
Epoch 139: Train D loss: -0.4522, G loss: 0.3095

Epoch 140: Train D loss: -0.4165, G loss: 0.2887
Epoch 141: Train D loss: -0.4360, G loss: 0.2971
Epoch 142: Train D loss: -0.4191, G loss: 0.2356
Epoch 143: Train D loss: -0.4221, G loss: 0.2402
Epoch 144: Train D loss: -0.4240, G loss: 0.2723
Epoch 145: Train D loss: -0.4361, G loss: 0.2990
Epoch 146: Train D loss: -0.4285, G loss: 0.2974
Epoch 147: Train D loss: -0.4224, G loss: 0.2515
Epoch 148: Train D loss: -0.4070, G loss: 0.2657
Epoch 149: Train D loss: -0.4416, G loss: 0.2511

Epoch 150: Train D loss: -0.4336, G loss: 0.3659
Epoch 151: Train D loss: -0.4246, G loss: 0.2381
Epoch 152: Train D loss: -0.4231, G loss: 0.2843
Epoch 153: Train D loss: -0.4018, G loss: 0.2265
Epoch 154: Train D loss: -0.3876, G loss: 0.2422
Epoch 155: Train D loss: -0.4153, G loss: 0.2838
Epoch 156: Train D loss: -0.4358, G loss: 0.3378
Epoch 157: Train D loss: -0.4061, G loss: 0.2386
Epoch 158: Train D loss: -0.3833, G loss: 0.2987
Epoch 159: Train D loss: -0.4094, G loss: 0.2439

Epoch 160: Train D loss: -0.4173, G loss: 0.2386
Epoch 161: Train D loss: -0.3968, G loss: 0.2182
Epoch 162: Train D loss: -0.4177, G loss: 0.2640
Epoch 163: Train D loss: -0.3976, G loss: 0.2689
Epoch 164: Train D loss: -0.4426, G loss: 0.3512
Epoch 165: Train D loss: -0.3699, G loss: 0.2796
Epoch 166: Train D loss: -0.3726, G loss: 0.2314
Epoch 167: Train D loss: -0.3976, G loss: 0.2914
Epoch 168: Train D loss: -0.3665, G loss: 0.2248
Epoch 169: Train D loss: -0.4202, G loss: 0.2628

Epoch 170: Train D loss: -0.3846, G loss: 0.2466
Epoch 171: Train D loss: -0.3988, G loss: 0.2335
Epoch 172: Train D loss: -0.4086, G loss: 0.3301
Epoch 173: Train D loss: -0.3763, G loss: 0.2423
Epoch 174: Train D loss: -0.4017, G loss: 0.2512
Epoch 175: Train D loss: -0.4408, G loss: 0.3596
Epoch 176: Train D loss: -0.3823, G loss: 0.2586
Epoch 177: Train D loss: -0.4091, G loss: 0.3018
Epoch 178: Train D loss: -0.3913, G loss: 0.2792
Epoch 179: Train D loss: -0.4088, G loss: 0.3014

Epoch 180: Train D loss: -0.3987, G loss: 0.3195
Epoch 181: Train D loss: -0.3838, G loss: 0.2836
Epoch 182: Train D loss: -0.3876, G loss: 0.2664
Epoch 183: Train D loss: -0.4179, G loss: 0.3010
Epoch 184: Train D loss: -0.3559, G loss: 0.2098
Epoch 185: Train D loss: -0.3916, G loss: 0.2718
Epoch 186: Train D loss: -0.4099, G loss: 0.2949
Epoch 187: Train D loss: -0.4221, G loss: 0.3154
Epoch 188: Train D loss: -0.4048, G loss: 0.2882
Epoch 189: Train D loss: -0.4010, G loss: 0.3003

Epoch 190: Train D loss: -0.3700, G loss: 0.2788
Epoch 191: Train D loss: -0.3725, G loss: 0.2084
Epoch 192: Train D loss: -0.4025, G loss: 0.2461
Epoch 193: Train D loss: -0.3623, G loss: 0.2879
Epoch 194: Train D loss: -0.3987, G loss: 0.2597
Epoch 195: Train D loss: -0.3701, G loss: 0.2437
Epoch 196: Train D loss: -0.3552, G loss: 0.2434
Epoch 197: Train D loss: -0.4049, G loss: 0.3127
Epoch 198: Train D loss: -0.3510, G loss: 0.2217
Epoch 199: Train D loss: -0.3862, G loss: 0.2418

Epoch 200: Train D loss: -0.3970, G loss: 0.2884
Epoch 201: Train D loss: -0.3774, G loss: 0.2686
Epoch 202: Train D loss: -0.3748, G loss: 0.2184
Epoch 203: Train D loss: -0.4068, G loss: 0.2756
Epoch 204: Train D loss: -0.3984, G loss: 0.2915
Epoch 205: Train D loss: -0.3739, G loss: 0.3065
Epoch 206: Train D loss: -0.3673, G loss: 0.2302
Epoch 207: Train D loss: -0.3688, G loss: 0.1996
Epoch 208: Train D loss: -0.3576, G loss: 0.2274
Epoch 209: Train D loss: -0.3747, G loss: 0.2799

Epoch 210: Train D loss: -0.3675, G loss: 0.2093
Epoch 211: Train D loss: -0.3777, G loss: 0.2361
Epoch 212: Train D loss: -0.3595, G loss: 0.2464
Epoch 213: Train D loss: -0.3858, G loss: 0.2702
Epoch 214: Train D loss: -0.3698, G loss: 0.2359
Epoch 215: Train D loss: -0.3813, G loss: 0.2324
Epoch 216: Train D loss: -0.3610, G loss: 0.2069
Epoch 217: Train D loss: -0.3743, G loss: 0.2611
Epoch 218: Train D loss: -0.3711, G loss: 0.2227
Epoch 219: Train D loss: -0.3581, G loss: 0.2269

Epoch 220: Train D loss: -0.3973, G loss: 0.2548
Epoch 221: Train D loss: -0.3525, G loss: 0.2520
Epoch 222: Train D loss: -0.3413, G loss: 0.2714
Epoch 223: Train D loss: -0.3649, G loss: 0.1984
Epoch 224: Train D loss: -0.4041, G loss: 0.3108
Epoch 225: Train D loss: -0.3459, G loss: 0.2636
Epoch 226: Train D loss: -0.3570, G loss: 0.2544
Epoch 227: Train D loss: -0.3938, G loss: 0.2834
Epoch 228: Train D loss: -0.3722, G loss: 0.3131
Epoch 229: Train D loss: -0.3584, G loss: 0.2719

Epoch 230: Train D loss: -0.3695, G loss: 0.1973
Epoch 231: Train D loss: -0.3349, G loss: 0.2796
Epoch 232: Train D loss: -0.3913, G loss: 0.2729
Epoch 233: Train D loss: -0.3467, G loss: 0.2079
Epoch 234: Train D loss: -0.3574, G loss: 0.2359
Epoch 235: Train D loss: -0.3556, G loss: 0.2472
Epoch 236: Train D loss: -0.3565, G loss: 0.2231
Epoch 237: Train D loss: -0.3487, G loss: 0.2301
Epoch 238: Train D loss: -0.3491, G loss: 0.2150
Epoch 239: Train D loss: -0.3637, G loss: 0.2929

Epoch 240: Train D loss: -0.3507, G loss: 0.2080
Epoch 241: Train D loss: -0.3629, G loss: 0.2306
Epoch 242: Train D loss: -0.3469, G loss: 0.2258
Epoch 243: Train D loss: -0.3999, G loss: 0.2952
Epoch 244: Train D loss: -0.3505, G loss: 0.2388
Epoch 245: Train D loss: -0.3439, G loss: 0.1793
Epoch 246: Train D loss: -0.3796, G loss: 0.2752
Epoch 247: Train D loss: -0.3459, G loss: 0.2573
Epoch 248: Train D loss: -0.3524, G loss: 0.2127
Epoch 249: Train D loss: -0.3869, G loss: 0.2908

Epoch 250: Train D loss: -0.3353, G loss: 0.2519
Epoch 251: Train D loss: -0.3551, G loss: 0.2659
Epoch 252: Train D loss: -0.3664, G loss: 0.1991
Epoch 253: Train D loss: -0.3360, G loss: 0.2598
Epoch 254: Train D loss: -0.3350, G loss: 0.2127
Epoch 255: Train D loss: -0.3554, G loss: 0.2622
Epoch 256: Train D loss: -0.3452, G loss: 0.2632
Epoch 257: Train D loss: -0.3472, G loss: 0.2112
Epoch 258: Train D loss: -0.3721, G loss: 0.2975
Epoch 259: Train D loss: -0.3365, G loss: 0.2792

Epoch 260: Train D loss: -0.3655, G loss: 0.2291
Epoch 261: Train D loss: -0.3491, G loss: 0.1637
Epoch 262: Train D loss: -0.3584, G loss: 0.2539
Epoch 263: Train D loss: -0.3526, G loss: 0.2583
Epoch 264: Train D loss: -0.3254, G loss: 0.1698
Epoch 265: Train D loss: -0.3703, G loss: 0.2395
Epoch 266: Train D loss: -0.3373, G loss: 0.2045
Epoch 267: Train D loss: -0.3318, G loss: 0.2333
Epoch 268: Train D loss: -0.3764, G loss: 0.2877
Epoch 269: Train D loss: -0.3452, G loss: 0.2250

Epoch 270: Train D loss: -0.3288, G loss: 0.2167
Epoch 271: Train D loss: -0.3441, G loss: 0.2213
Epoch 272: Train D loss: -0.3374, G loss: 0.1973
Epoch 273: Train D loss: -0.3295, G loss: 0.1776
Epoch 274: Train D loss: -0.3518, G loss: 0.2294
Epoch 275: Train D loss: -0.3528, G loss: 0.2357
Epoch 276: Train D loss: -0.3282, G loss: 0.2365
Epoch 277: Train D loss: -0.3314, G loss: 0.2657
Epoch 278: Train D loss: -0.3430, G loss: 0.2375
Epoch 279: Train D loss: -0.3411, G loss: 0.2684

Epoch 280: Train D loss: -0.3420, G loss: 0.2176
Epoch 281: Train D loss: -0.3566, G loss: 0.2435
Epoch 282: Train D loss: -0.3164, G loss: 0.2247
Epoch 283: Train D loss: -0.3413, G loss: 0.2615
Epoch 284: Train D loss: -0.3329, G loss: 0.2564
Epoch 285: Train D loss: -0.3325, G loss: 0.2060
Epoch 286: Train D loss: -0.3658, G loss: 0.2411
Epoch 287: Train D loss: -0.3306, G loss: 0.2545
Epoch 288: Train D loss: -0.3219, G loss: 0.2016
Epoch 289: Train D loss: -0.3500, G loss: 0.2295

Epoch 290: Train D loss: -0.3106, G loss: 0.2088
Epoch 291: Train D loss: -0.3219, G loss: 0.1998
Epoch 292: Train D loss: -0.3572, G loss: 0.2716
Epoch 293: Train D loss: -0.3290, G loss: 0.2812
Epoch 294: Train D loss: -0.3273, G loss: 0.2141
Epoch 295: Train D loss: -0.3324, G loss: 0.2854
Epoch 296: Train D loss: -0.3222, G loss: 0.2421
Epoch 297: Train D loss: -0.3475, G loss: 0.2820
Epoch 298: Train D loss: -0.3196, G loss: 0.2251
Epoch 299: Train D loss: -0.3290, G loss: 0.2239

As we know from the theory of WGAN, the negative of D_loss estimates the Wasserstein distance between the generated and real data distributions: the smaller it is, the more similar the two distributions, and the better the GAN is trained. Its value therefore gives us a concrete metric to monitor while training a GAN.

Run the code below to plot the WGAN loss curves. Overall, the negative of D_loss decreases as the number of epochs increases, while the generated data grows ever closer to the real data, which agrees with the theory behind WGAN.
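As a minimal sketch (not the notebook's `run_wgan` itself), the two losses tracked in these curves can be computed from the critic's raw outputs; `d_real` and `d_fake` below are illustrative stand-ins for the critic scores on one real and one generated batch:

```python
import torch

# Illustrative critic scores; no sigmoid, so these are unbounded values
d_real = torch.tensor([0.9, 1.1, 0.8])   # D(x) on real images
d_fake = torch.tensor([-0.5, -0.7, -0.3])  # D(G(z)) on generated images

# The critic maximizes E[D(x)] - E[D(G(z))]; written as a loss to minimize:
d_loss = -(d_real.mean() - d_fake.mean())
# The generator minimizes -E[D(G(z))]
g_loss = -d_fake.mean()

# -D_loss estimates the Wasserstein distance between the two batches
w_estimate = -d_loss.item()
print(round(w_estimate, 4))
```

Because the critic has no sigmoid, its outputs are scores rather than probabilities, which is what lets -D_loss approximate the Wasserstein distance.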

loss_plot(d_loss_hist, g_loss_hist)

Next, run the following two cells to visualize the distribution of the WGAN discriminator's parameters.

from utils import show_weights_hist
def show_d_params(D):
    # flatten every discriminator parameter into one 1-D array for the histogram
    plist = []
    for params in D.parameters():
        plist.extend(params.cpu().data.view(-1).numpy())
    show_weights_hist(plist)
show_d_params(D)

As we can see, the parameters are all clipped into [-c, c], with most of them concentrated near -c and c.
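The pile-up at the boundaries is a direct consequence of the clipping step applied after every critic update. A minimal sketch of that step, assuming a clip value c = 0.01 and a small linear layer as a stand-in for the critic:

```python
import torch
import torch.nn as nn

c = 0.01                 # illustrative clip value (weight_clip in the notebook)
D = nn.Linear(4, 1)      # stand-in for the critic network

# After each critic update, WGAN clamps every parameter into [-c, c] in place
with torch.no_grad():
    for p in D.parameters():
        p.clamp_(-c, c)

# Verify: every parameter now lies inside [-c, c]
all_clipped = all((p.abs() <= c).all().item() for p in D.parameters())
print(all_clipped)
```

Parameters that started outside the interval get pinned exactly at -c or c, which produces the spikes at both ends of the histogram.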

Homework:

Try setting n_d to 5, 3, and 1 and training the WGAN again. Which value of n_d gives the best result?

Answer:

With n_d = 5, the details of the generated images are noticeably clearer than in the other two settings. In this configuration, D is trained five times for every update of G, so each generator update is guided by a better-trained critic. This improves performance significantly compared with updating G after every critic step.
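The effect of n_d on the update schedule can be sketched with a tiny helper (a stand-in for the loop inside `run_wgan`; the function name is illustrative):

```python
def train_schedule(n_batches, n_d):
    """Return the sequence of 'D'/'G' update steps for one epoch."""
    steps = []
    for i in range(n_batches):
        steps.append('D')            # critic update on every batch
        if (i + 1) % n_d == 0:
            steps.append('G')        # generator update every n_d batches
    return steps

# With n_d = 5, G is updated once per 5 critic updates
steps = train_schedule(10, 5)
print(steps.count('D'), steps.count('G'))  # → 10 2
```

Larger n_d means each generator step works against a critic closer to optimality, at the cost of fewer generator updates per epoch.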

n_d = 1

# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device, 
                                    latent_dim, n_d, weight_clip)

loss_plot(d_loss_hist, g_loss_hist)
Epoch 0: Train D loss: -0.0251, G loss: -0.0089

Epoch 1: Train D loss: -0.0200, G loss: -0.0058
Epoch 2: Train D loss: -0.0403, G loss: 0.0151
Epoch 3: Train D loss: -0.0840, G loss: 0.0692
Epoch 4: Train D loss: -0.1110, G loss: 0.1149
Epoch 5: Train D loss: -0.0798, G loss: 0.0653
Epoch 6: Train D loss: -0.0668, G loss: 0.0619
Epoch 7: Train D loss: -0.0763, G loss: 0.0924
Epoch 8: Train D loss: -0.1395, G loss: 0.1376
Epoch 9: Train D loss: -0.1790, G loss: 0.1760

Epoch 10: Train D loss: -0.1733, G loss: 0.1778
Epoch 11: Train D loss: -0.1643, G loss: 0.2132
Epoch 12: Train D loss: -0.2438, G loss: 0.2327
Epoch 13: Train D loss: -0.2688, G loss: 0.2631
Epoch 14: Train D loss: -0.2538, G loss: 0.2624
Epoch 15: Train D loss: -0.1750, G loss: 0.1571
Epoch 16: Train D loss: -0.2005, G loss: 0.1801
Epoch 17: Train D loss: -0.2626, G loss: 0.1983
Epoch 18: Train D loss: -0.2573, G loss: 0.2271
Epoch 19: Train D loss: -0.2479, G loss: 0.2566

Epoch 20: Train D loss: -0.1754, G loss: 0.2312
Epoch 21: Train D loss: -0.2361, G loss: 0.2213
Epoch 22: Train D loss: -0.4678, G loss: 0.3198
Epoch 23: Train D loss: -0.3996, G loss: 0.3100
Epoch 24: Train D loss: -0.4355, G loss: 0.3225
Epoch 25: Train D loss: -0.4151, G loss: 0.3199
Epoch 26: Train D loss: -0.3595, G loss: 0.3087
Epoch 27: Train D loss: -0.4016, G loss: 0.3302
Epoch 28: Train D loss: -0.3243, G loss: 0.2787
Epoch 29: Train D loss: -0.2890, G loss: 0.2380

Epoch 30: Train D loss: -0.1935, G loss: 0.1274
Epoch 31: Train D loss: -0.4133, G loss: 0.3306
Epoch 32: Train D loss: -0.2924, G loss: 0.2732
Epoch 33: Train D loss: -0.3298, G loss: 0.3033
Epoch 34: Train D loss: -0.3138, G loss: 0.2745
Epoch 35: Train D loss: -0.4105, G loss: 0.3589
Epoch 36: Train D loss: -0.2292, G loss: 0.2321
Epoch 37: Train D loss: -0.4472, G loss: 0.3496
Epoch 38: Train D loss: -0.3871, G loss: 0.3079
Epoch 39: Train D loss: -0.3574, G loss: 0.3200

Epoch 40: Train D loss: -0.4521, G loss: 0.3567
Epoch 41: Train D loss: -0.3822, G loss: 0.3030
Epoch 42: Train D loss: -0.3556, G loss: 0.3106
Epoch 43: Train D loss: -0.4338, G loss: 0.3545
Epoch 44: Train D loss: -0.4273, G loss: 0.3315
Epoch 45: Train D loss: -0.4402, G loss: 0.3320
Epoch 46: Train D loss: -0.3696, G loss: 0.3154
Epoch 47: Train D loss: -0.4215, G loss: 0.3088
Epoch 48: Train D loss: -0.4023, G loss: 0.3035
Epoch 49: Train D loss: -0.4106, G loss: 0.3108

Epoch 50: Train D loss: -0.4090, G loss: 0.3000
Epoch 51: Train D loss: -0.3908, G loss: 0.3033
Epoch 52: Train D loss: -0.3929, G loss: 0.3011
Epoch 53: Train D loss: -0.3975, G loss: 0.2898
Epoch 54: Train D loss: -0.3904, G loss: 0.3115
Epoch 55: Train D loss: -0.3649, G loss: 0.2771
Epoch 56: Train D loss: -0.3763, G loss: 0.2938
Epoch 57: Train D loss: -0.3817, G loss: 0.3170
Epoch 58: Train D loss: -0.3438, G loss: 0.2766
Epoch 59: Train D loss: -0.3707, G loss: 0.3001

Epoch 60: Train D loss: -0.3143, G loss: 0.2771
Epoch 61: Train D loss: -0.3585, G loss: 0.2543
Epoch 62: Train D loss: -0.2917, G loss: 0.2655
Epoch 63: Train D loss: -0.3142, G loss: 0.2470
Epoch 64: Train D loss: -0.3526, G loss: 0.2751
Epoch 65: Train D loss: -0.3124, G loss: 0.2590
Epoch 66: Train D loss: -0.3660, G loss: 0.2753
Epoch 67: Train D loss: -0.3247, G loss: 0.2614
Epoch 68: Train D loss: -0.3709, G loss: 0.2655
Epoch 69: Train D loss: -0.3724, G loss: 0.2551

Epoch 70: Train D loss: -0.3513, G loss: 0.2818
Epoch 71: Train D loss: -0.3732, G loss: 0.2758
Epoch 72: Train D loss: -0.3670, G loss: 0.2847
Epoch 73: Train D loss: -0.3787, G loss: 0.2662
Epoch 74: Train D loss: -0.3265, G loss: 0.2683
Epoch 75: Train D loss: -0.3299, G loss: 0.2699
Epoch 76: Train D loss: -0.3220, G loss: 0.2497
Epoch 77: Train D loss: -0.3586, G loss: 0.2661
Epoch 78: Train D loss: -0.3090, G loss: 0.2517
Epoch 79: Train D loss: -0.3416, G loss: 0.2879

Epoch 80: Train D loss: -0.3395, G loss: 0.2623
Epoch 81: Train D loss: -0.3159, G loss: 0.2564
Epoch 82: Train D loss: -0.3330, G loss: 0.2581
Epoch 83: Train D loss: -0.3255, G loss: 0.2388
Epoch 84: Train D loss: -0.3147, G loss: 0.2377
Epoch 85: Train D loss: -0.3027, G loss: 0.2493
Epoch 86: Train D loss: -0.2968, G loss: 0.2678
Epoch 87: Train D loss: -0.3426, G loss: 0.2661
Epoch 88: Train D loss: -0.3025, G loss: 0.2372
Epoch 89: Train D loss: -0.3201, G loss: 0.2661

Epoch 90: Train D loss: -0.3142, G loss: 0.2381
Epoch 91: Train D loss: -0.3134, G loss: 0.2794
Epoch 92: Train D loss: -0.3152, G loss: 0.2441
Epoch 93: Train D loss: -0.2759, G loss: 0.2247
Epoch 94: Train D loss: -0.3272, G loss: 0.2383
Epoch 95: Train D loss: -0.3037, G loss: 0.2421
Epoch 96: Train D loss: -0.3363, G loss: 0.2459
Epoch 97: Train D loss: -0.2995, G loss: 0.2368
Epoch 98: Train D loss: -0.3193, G loss: 0.2488
Epoch 99: Train D loss: -0.2844, G loss: 0.2406

Epoch 100: Train D loss: -0.2759, G loss: 0.2278
Epoch 101: Train D loss: -0.3214, G loss: 0.2777
Epoch 102: Train D loss: -0.2526, G loss: 0.1858
Epoch 103: Train D loss: -0.2974, G loss: 0.2650
Epoch 104: Train D loss: -0.2727, G loss: 0.2314
Epoch 105: Train D loss: -0.2981, G loss: 0.2449
Epoch 106: Train D loss: -0.2720, G loss: 0.2246
Epoch 107: Train D loss: -0.2757, G loss: 0.2265
Epoch 108: Train D loss: -0.2916, G loss: 0.2324
Epoch 109: Train D loss: -0.2873, G loss: 0.2409

Epoch 110: Train D loss: -0.3258, G loss: 0.2455
Epoch 111: Train D loss: -0.2792, G loss: 0.2165
Epoch 112: Train D loss: -0.2628, G loss: 0.2082
Epoch 113: Train D loss: -0.2472, G loss: 0.2215
Epoch 114: Train D loss: -0.3055, G loss: 0.2416
Epoch 115: Train D loss: -0.2951, G loss: 0.2404
Epoch 116: Train D loss: -0.2452, G loss: 0.2124
Epoch 117: Train D loss: -0.2905, G loss: 0.2417
Epoch 118: Train D loss: -0.2688, G loss: 0.2013
Epoch 119: Train D loss: -0.2906, G loss: 0.2098

Epoch 120: Train D loss: -0.2961, G loss: 0.2335
Epoch 121: Train D loss: -0.2849, G loss: 0.2295
Epoch 122: Train D loss: -0.3033, G loss: 0.2328
Epoch 123: Train D loss: -0.2319, G loss: 0.2089
Epoch 124: Train D loss: -0.2546, G loss: 0.2293
Epoch 125: Train D loss: -0.2590, G loss: 0.1938
Epoch 126: Train D loss: -0.3006, G loss: 0.2242
Epoch 127: Train D loss: -0.2782, G loss: 0.2226
Epoch 128: Train D loss: -0.2835, G loss: 0.2267
Epoch 129: Train D loss: -0.2987, G loss: 0.2184

Epoch 130: Train D loss: -0.2939, G loss: 0.2210
Epoch 131: Train D loss: -0.2860, G loss: 0.2122
Epoch 132: Train D loss: -0.2530, G loss: 0.2199
Epoch 133: Train D loss: -0.2879, G loss: 0.2049
Epoch 134: Train D loss: -0.2739, G loss: 0.2252
Epoch 135: Train D loss: -0.2945, G loss: 0.2069
Epoch 136: Train D loss: -0.2766, G loss: 0.2293
Epoch 137: Train D loss: -0.2912, G loss: 0.2372
Epoch 138: Train D loss: -0.2518, G loss: 0.2216
Epoch 139: Train D loss: -0.2755, G loss: 0.2106

Epoch 140: Train D loss: -0.2668, G loss: 0.2217
Epoch 141: Train D loss: -0.2914, G loss: 0.2249
Epoch 142: Train D loss: -0.2738, G loss: 0.2265
Epoch 143: Train D loss: -0.2759, G loss: 0.2196
Epoch 144: Train D loss: -0.2462, G loss: 0.2296
Epoch 145: Train D loss: -0.2633, G loss: 0.2015
Epoch 146: Train D loss: -0.2712, G loss: 0.2303
Epoch 147: Train D loss: -0.2745, G loss: 0.2223
Epoch 148: Train D loss: -0.2571, G loss: 0.1958
Epoch 149: Train D loss: -0.2893, G loss: 0.2324

Epoch 150: Train D loss: -0.2381, G loss: 0.1774
Epoch 151: Train D loss: -0.2465, G loss: 0.2190
Epoch 152: Train D loss: -0.2669, G loss: 0.2348
Epoch 153: Train D loss: -0.2786, G loss: 0.2017
Epoch 154: Train D loss: -0.2587, G loss: 0.2239
Epoch 155: Train D loss: -0.2146, G loss: 0.1925
Epoch 156: Train D loss: -0.2597, G loss: 0.1967
Epoch 157: Train D loss: -0.2501, G loss: 0.2057
Epoch 158: Train D loss: -0.2805, G loss: 0.2325
Epoch 159: Train D loss: -0.2543, G loss: 0.2218

Epoch 160: Train D loss: -0.2678, G loss: 0.2122
Epoch 161: Train D loss: -0.2413, G loss: 0.1966
Epoch 162: Train D loss: -0.2649, G loss: 0.1986
Epoch 163: Train D loss: -0.2548, G loss: 0.2159
Epoch 164: Train D loss: -0.2538, G loss: 0.2026
Epoch 165: Train D loss: -0.2507, G loss: 0.2158
Epoch 166: Train D loss: -0.2629, G loss: 0.1852
Epoch 167: Train D loss: -0.2426, G loss: 0.2239
Epoch 168: Train D loss: -0.2453, G loss: 0.2114
Epoch 169: Train D loss: -0.2481, G loss: 0.2046

Epoch 170: Train D loss: -0.2507, G loss: 0.1721
Epoch 171: Train D loss: -0.2608, G loss: 0.2228
Epoch 172: Train D loss: -0.2280, G loss: 0.2116
Epoch 173: Train D loss: -0.2886, G loss: 0.2223
Epoch 174: Train D loss: -0.2316, G loss: 0.2180
Epoch 175: Train D loss: -0.2239, G loss: 0.2092
Epoch 176: Train D loss: -0.2446, G loss: 0.1695
Epoch 177: Train D loss: -0.2518, G loss: 0.2138
Epoch 178: Train D loss: -0.2390, G loss: 0.2129
Epoch 179: Train D loss: -0.2440, G loss: 0.2033

Epoch 180: Train D loss: -0.2486, G loss: 0.2027
Epoch 181: Train D loss: -0.2479, G loss: 0.2045
Epoch 182: Train D loss: -0.2298, G loss: 0.2031
Epoch 183: Train D loss: -0.2450, G loss: 0.2097
Epoch 184: Train D loss: -0.2578, G loss: 0.1970
Epoch 185: Train D loss: -0.2115, G loss: 0.1966
Epoch 186: Train D loss: -0.2230, G loss: 0.1919
Epoch 187: Train D loss: -0.2660, G loss: 0.2122
Epoch 188: Train D loss: -0.2134, G loss: 0.1742
Epoch 189: Train D loss: -0.2074, G loss: 0.1924

Epoch 190: Train D loss: -0.2455, G loss: 0.2010
Epoch 191: Train D loss: -0.2470, G loss: 0.1843
Epoch 192: Train D loss: -0.2389, G loss: 0.1997
Epoch 193: Train D loss: -0.2394, G loss: 0.1923
Epoch 194: Train D loss: -0.2187, G loss: 0.1920
Epoch 195: Train D loss: -0.2173, G loss: 0.1878
Epoch 196: Train D loss: -0.2466, G loss: 0.1835
Epoch 197: Train D loss: -0.2346, G loss: 0.2000
Epoch 198: Train D loss: -0.2361, G loss: 0.2180
Epoch 199: Train D loss: -0.2269, G loss: 0.1883

Epoch 200: Train D loss: -0.2445, G loss: 0.1946
Epoch 201: Train D loss: -0.2367, G loss: 0.1866
Epoch 202: Train D loss: -0.2056, G loss: 0.2082
Epoch 203: Train D loss: -0.2304, G loss: 0.1808
Epoch 204: Train D loss: -0.2330, G loss: 0.1715
Epoch 205: Train D loss: -0.2090, G loss: 0.1918
Epoch 206: Train D loss: -0.2268, G loss: 0.2334
Epoch 207: Train D loss: -0.2148, G loss: 0.2041
Epoch 208: Train D loss: -0.2450, G loss: 0.1836
Epoch 209: Train D loss: -0.1970, G loss: 0.1692

Epoch 210: Train D loss: -0.2380, G loss: 0.1907
Epoch 211: Train D loss: -0.2117, G loss: 0.1849
Epoch 212: Train D loss: -0.2198, G loss: 0.1749
Epoch 213: Train D loss: -0.2262, G loss: 0.1880
Epoch 214: Train D loss: -0.2284, G loss: 0.1666
Epoch 215: Train D loss: -0.2300, G loss: 0.1870
Epoch 216: Train D loss: -0.2201, G loss: 0.1734
Epoch 217: Train D loss: -0.2072, G loss: 0.1885
Epoch 218: Train D loss: -0.2287, G loss: 0.2114
Epoch 219: Train D loss: -0.2078, G loss: 0.1776

Epoch 220: Train D loss: -0.1870, G loss: 0.2039
Epoch 221: Train D loss: -0.2357, G loss: 0.1899
Epoch 222: Train D loss: -0.2282, G loss: 0.1880
Epoch 223: Train D loss: -0.2147, G loss: 0.1792
Epoch 224: Train D loss: -0.2262, G loss: 0.1586
Epoch 225: Train D loss: -0.2214, G loss: 0.1793
Epoch 226: Train D loss: -0.2079, G loss: 0.1833
Epoch 227: Train D loss: -0.1983, G loss: 0.1563
Epoch 228: Train D loss: -0.2293, G loss: 0.1890
Epoch 229: Train D loss: -0.2059, G loss: 0.1742

Epoch 230: Train D loss: -0.2259, G loss: 0.1812
Epoch 231: Train D loss: -0.1987, G loss: 0.1810
Epoch 232: Train D loss: -0.2026, G loss: 0.1536
Epoch 233: Train D loss: -0.2209, G loss: 0.2131
Epoch 234: Train D loss: -0.2128, G loss: 0.1674
Epoch 235: Train D loss: -0.2041, G loss: 0.1710
Epoch 236: Train D loss: -0.2014, G loss: 0.1857
Epoch 237: Train D loss: -0.2257, G loss: 0.1806
Epoch 238: Train D loss: -0.2114, G loss: 0.1650
Epoch 239: Train D loss: -0.2155, G loss: 0.1767

Epoch 240: Train D loss: -0.2051, G loss: 0.1559
Epoch 241: Train D loss: -0.2053, G loss: 0.1931
Epoch 242: Train D loss: -0.2317, G loss: 0.1829
Epoch 243: Train D loss: -0.2089, G loss: 0.1763
Epoch 244: Train D loss: -0.2194, G loss: 0.1691
Epoch 245: Train D loss: -0.2043, G loss: 0.1908
Epoch 246: Train D loss: -0.2223, G loss: 0.1959
Epoch 247: Train D loss: -0.1789, G loss: 0.1945
Epoch 248: Train D loss: -0.2258, G loss: 0.1648
Epoch 249: Train D loss: -0.2205, G loss: 0.1917

Epoch 250: Train D loss: -0.2060, G loss: 0.1510
Epoch 251: Train D loss: -0.1776, G loss: 0.1665
Epoch 252: Train D loss: -0.2354, G loss: 0.1724
Epoch 253: Train D loss: -0.1970, G loss: 0.1827
Epoch 254: Train D loss: -0.2107, G loss: 0.1442
Epoch 255: Train D loss: -0.2081, G loss: 0.1920
Epoch 256: Train D loss: -0.2121, G loss: 0.1802
Epoch 257: Train D loss: -0.2293, G loss: 0.2027
Epoch 258: Train D loss: -0.2291, G loss: 0.1794
Epoch 259: Train D loss: -0.2190, G loss: 0.1685

Epoch 260: Train D loss: -0.2122, G loss: 0.1938
Epoch 261: Train D loss: -0.2077, G loss: 0.2103
Epoch 262: Train D loss: -0.2185, G loss: 0.1723
Epoch 263: Train D loss: -0.1915, G loss: 0.1955
Epoch 264: Train D loss: -0.1927, G loss: 0.1691
Epoch 265: Train D loss: -0.2193, G loss: 0.1829
Epoch 266: Train D loss: -0.2052, G loss: 0.1492
Epoch 267: Train D loss: -0.2017, G loss: 0.1643
Epoch 268: Train D loss: -0.2099, G loss: 0.1678
Epoch 269: Train D loss: -0.2328, G loss: 0.2036

Epoch 270: Train D loss: -0.2359, G loss: 0.2046
Epoch 271: Train D loss: -0.2136, G loss: 0.1649
Epoch 272: Train D loss: -0.1894, G loss: 0.1657
Epoch 273: Train D loss: -0.2170, G loss: 0.1960
Epoch 274: Train D loss: -0.2183, G loss: 0.1951
Epoch 275: Train D loss: -0.2058, G loss: 0.1936
Epoch 276: Train D loss: -0.2202, G loss: 0.1841
Epoch 277: Train D loss: -0.1980, G loss: 0.1757
Epoch 278: Train D loss: -0.2273, G loss: 0.1911
Epoch 279: Train D loss: -0.1750, G loss: 0.1747

Epoch 280: Train D loss: -0.1990, G loss: 0.1610
Epoch 281: Train D loss: -0.2045, G loss: 0.2129
Epoch 282: Train D loss: -0.1959, G loss: 0.1990
Epoch 283: Train D loss: -0.1795, G loss: 0.1501
Epoch 284: Train D loss: -0.1925, G loss: 0.1886
Epoch 285: Train D loss: -0.1922, G loss: 0.1648
Epoch 286: Train D loss: -0.1990, G loss: 0.1833
Epoch 287: Train D loss: -0.1987, G loss: 0.1909
Epoch 288: Train D loss: -0.2003, G loss: 0.1681
Epoch 289: Train D loss: -0.2046, G loss: 0.1724

Epoch 290: Train D loss: -0.2004, G loss: 0.1841
Epoch 291: Train D loss: -0.2178, G loss: 0.1841
Epoch 292: Train D loss: -0.1769, G loss: 0.1601
Epoch 293: Train D loss: -0.1852, G loss: 0.1555
Epoch 294: Train D loss: -0.1895, G loss: 0.1879
Epoch 295: Train D loss: -0.1996, G loss: 0.1534
Epoch 296: Train D loss: -0.1944, G loss: 0.1817
Epoch 297: Train D loss: -0.1926, G loss: 0.1857
Epoch 298: Train D loss: -0.2057, G loss: 0.1622
Epoch 299: Train D loss: -0.2130, G loss: 0.1960

n_d = 3

# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device, 
                                    latent_dim, n_d, weight_clip)

loss_plot(d_loss_hist, g_loss_hist)
Epoch 0: Train D loss: 0.0069, G loss: 0.0021

Epoch 1: Train D loss: -0.0791, G loss: 0.0306
Epoch 2: Train D loss: -0.1852, G loss: 0.1159
Epoch 3: Train D loss: -0.3618, G loss: 0.2186
Epoch 4: Train D loss: -0.4753, G loss: 0.2786
Epoch 5: Train D loss: -0.6302, G loss: 0.3484
Epoch 6: Train D loss: -0.7498, G loss: 0.3949
Epoch 7: Train D loss: -0.8587, G loss: 0.4415
Epoch 8: Train D loss: -0.9714, G loss: 0.4878
Epoch 9: Train D loss: -1.0270, G loss: 0.5135

Epoch 10: Train D loss: -1.0649, G loss: 0.5341
Epoch 11: Train D loss: -0.9526, G loss: 0.5177
Epoch 12: Train D loss: -0.8284, G loss: 0.4603
Epoch 13: Train D loss: -0.9364, G loss: 0.5148
Epoch 14: Train D loss: -1.0217, G loss: 0.5523
Epoch 15: Train D loss: -0.9515, G loss: 0.4988
Epoch 16: Train D loss: -0.9435, G loss: 0.5272
Epoch 17: Train D loss: -0.8170, G loss: 0.4336
Epoch 18: Train D loss: -0.8701, G loss: 0.4690
Epoch 19: Train D loss: -0.9068, G loss: 0.5018

Epoch 20: Train D loss: -0.8681, G loss: 0.4756
Epoch 21: Train D loss: -0.8347, G loss: 0.4296
Epoch 22: Train D loss: -0.8639, G loss: 0.4728
Epoch 23: Train D loss: -0.7830, G loss: 0.4581
Epoch 24: Train D loss: -0.7746, G loss: 0.4464
Epoch 25: Train D loss: -0.8700, G loss: 0.4785
Epoch 26: Train D loss: -0.8557, G loss: 0.4636
Epoch 27: Train D loss: -0.7885, G loss: 0.4442
Epoch 28: Train D loss: -0.7860, G loss: 0.4482
Epoch 29: Train D loss: -0.7841, G loss: 0.4317

Epoch 30: Train D loss: -0.7860, G loss: 0.4485
Epoch 31: Train D loss: -0.7364, G loss: 0.3905
Epoch 32: Train D loss: -0.8096, G loss: 0.4483
Epoch 33: Train D loss: -0.7795, G loss: 0.4159
Epoch 34: Train D loss: -0.8012, G loss: 0.4285
Epoch 35: Train D loss: -0.8678, G loss: 0.4492
Epoch 36: Train D loss: -0.8304, G loss: 0.4384
Epoch 37: Train D loss: -0.7744, G loss: 0.4146
Epoch 38: Train D loss: -0.8753, G loss: 0.4655
Epoch 39: Train D loss: -0.7985, G loss: 0.4459

Epoch 40: Train D loss: -0.8734, G loss: 0.4637
Epoch 41: Train D loss: -0.7758, G loss: 0.4154
Epoch 42: Train D loss: -0.8824, G loss: 0.4854
Epoch 43: Train D loss: -0.8731, G loss: 0.4985
Epoch 44: Train D loss: -0.8091, G loss: 0.4858
Epoch 45: Train D loss: -0.8033, G loss: 0.4514
Epoch 46: Train D loss: -0.8384, G loss: 0.4637
Epoch 47: Train D loss: -0.8579, G loss: 0.4598
Epoch 48: Train D loss: -0.8450, G loss: 0.4637
Epoch 49: Train D loss: -0.8142, G loss: 0.4061

Epoch 50: Train D loss: -0.7923, G loss: 0.4501
Epoch 51: Train D loss: -0.8401, G loss: 0.4642
Epoch 52: Train D loss: -0.7387, G loss: 0.4028
Epoch 53: Train D loss: -0.7344, G loss: 0.3053
Epoch 54: Train D loss: -0.7583, G loss: 0.4553
Epoch 55: Train D loss: -0.7854, G loss: 0.4523
Epoch 56: Train D loss: -0.7285, G loss: 0.4167
Epoch 57: Train D loss: -0.7156, G loss: 0.3996
Epoch 58: Train D loss: -0.6413, G loss: 0.3391
Epoch 59: Train D loss: -0.6586, G loss: 0.3514

Epoch 60: Train D loss: -0.6450, G loss: 0.3338
Epoch 61: Train D loss: -0.7662, G loss: 0.4429
Epoch 62: Train D loss: -0.7051, G loss: 0.4120
Epoch 63: Train D loss: -0.7189, G loss: 0.4479
Epoch 64: Train D loss: -0.6523, G loss: 0.3338
Epoch 65: Train D loss: -0.7539, G loss: 0.4075
Epoch 66: Train D loss: -0.6889, G loss: 0.3944
Epoch 67: Train D loss: -0.6866, G loss: 0.3730
Epoch 68: Train D loss: -0.6780, G loss: 0.4087
Epoch 69: Train D loss: -0.6562, G loss: 0.3491

Epoch 70: Train D loss: -0.6596, G loss: 0.3492
Epoch 71: Train D loss: -0.7249, G loss: 0.4177
Epoch 72: Train D loss: -0.6668, G loss: 0.4044
Epoch 73: Train D loss: -0.6950, G loss: 0.3755
Epoch 74: Train D loss: -0.6663, G loss: 0.3680
Epoch 75: Train D loss: -0.6601, G loss: 0.3301
Epoch 76: Train D loss: -0.6001, G loss: 0.2926
Epoch 77: Train D loss: -0.6960, G loss: 0.3750
Epoch 78: Train D loss: -0.6288, G loss: 0.3416
Epoch 79: Train D loss: -0.6259, G loss: 0.3518

Epoch 80: Train D loss: -0.6243, G loss: 0.3354
Epoch 81: Train D loss: -0.6343, G loss: 0.3845
Epoch 82: Train D loss: -0.6267, G loss: 0.3438
Epoch 83: Train D loss: -0.6138, G loss: 0.3425
Epoch 84: Train D loss: -0.6404, G loss: 0.3750
Epoch 85: Train D loss: -0.6136, G loss: 0.3387
Epoch 86: Train D loss: -0.6177, G loss: 0.3202
Epoch 87: Train D loss: -0.6097, G loss: 0.3256
Epoch 88: Train D loss: -0.6460, G loss: 0.3832
Epoch 89: Train D loss: -0.6099, G loss: 0.3429

Epoch 90: Train D loss: -0.5991, G loss: 0.2513
Epoch 91: Train D loss: -0.6265, G loss: 0.3690
Epoch 92: Train D loss: -0.5959, G loss: 0.3161
Epoch 93: Train D loss: -0.6434, G loss: 0.3765
Epoch 94: Train D loss: -0.6334, G loss: 0.3379
Epoch 95: Train D loss: -0.6047, G loss: 0.3973
Epoch 96: Train D loss: -0.6496, G loss: 0.3812
Epoch 97: Train D loss: -0.5801, G loss: 0.2678
Epoch 98: Train D loss: -0.6257, G loss: 0.3552
Epoch 99: Train D loss: -0.6174, G loss: 0.3189

Epoch 100: Train D loss: -0.5911, G loss: 0.3455
Epoch 101: Train D loss: -0.5930, G loss: 0.3406
Epoch 102: Train D loss: -0.6089, G loss: 0.3152
Epoch 103: Train D loss: -0.6281, G loss: 0.4078
Epoch 104: Train D loss: -0.5217, G loss: 0.2174
Epoch 105: Train D loss: -0.6080, G loss: 0.3056
Epoch 106: Train D loss: -0.5861, G loss: 0.3415
Epoch 107: Train D loss: -0.5905, G loss: 0.3593
Epoch 108: Train D loss: -0.5922, G loss: 0.2814
Epoch 109: Train D loss: -0.5675, G loss: 0.3554

Epoch 110: Train D loss: -0.5943, G loss: 0.3412
Epoch 111: Train D loss: -0.6133, G loss: 0.3882
Epoch 112: Train D loss: -0.5472, G loss: 0.2677
Epoch 113: Train D loss: -0.6031, G loss: 0.3291
Epoch 114: Train D loss: -0.5683, G loss: 0.2826
Epoch 115: Train D loss: -0.5972, G loss: 0.3619
Epoch 116: Train D loss: -0.5728, G loss: 0.3249
Epoch 117: Train D loss: -0.5706, G loss: 0.3169
Epoch 118: Train D loss: -0.5721, G loss: 0.3839
Epoch 119: Train D loss: -0.5852, G loss: 0.3488

Epoch 120: Train D loss: -0.5471, G loss: 0.3086
Epoch 121: Train D loss: -0.6026, G loss: 0.3416
Epoch 122: Train D loss: -0.5638, G loss: 0.3033
Epoch 123: Train D loss: -0.5507, G loss: 0.3047
Epoch 124: Train D loss: -0.5985, G loss: 0.3209
Epoch 125: Train D loss: -0.5730, G loss: 0.3437
Epoch 126: Train D loss: -0.5354, G loss: 0.3027
Epoch 127: Train D loss: -0.5748, G loss: 0.3071
Epoch 128: Train D loss: -0.5841, G loss: 0.3407
Epoch 129: Train D loss: -0.5003, G loss: 0.2754

Epoch 130: Train D loss: -0.6322, G loss: 0.3722
Epoch 131: Train D loss: -0.5453, G loss: 0.3680
Epoch 132: Train D loss: -0.5515, G loss: 0.3148
Epoch 133: Train D loss: -0.5746, G loss: 0.3071
Epoch 134: Train D loss: -0.5389, G loss: 0.3534
Epoch 135: Train D loss: -0.5897, G loss: 0.3563
Epoch 136: Train D loss: -0.5403, G loss: 0.3443
Epoch 137: Train D loss: -0.5767, G loss: 0.3442
Epoch 138: Train D loss: -0.5511, G loss: 0.2635
Epoch 139: Train D loss: -0.5704, G loss: 0.3539

Epoch 140: Train D loss: -0.5745, G loss: 0.3535
Epoch 141: Train D loss: -0.5080, G loss: 0.2365
Epoch 142: Train D loss: -0.5906, G loss: 0.3833
Epoch 143: Train D loss: -0.5648, G loss: 0.3654
Epoch 144: Train D loss: -0.5041, G loss: 0.2281
Epoch 145: Train D loss: -0.5975, G loss: 0.3107
Epoch 146: Train D loss: -0.5407, G loss: 0.3014
Epoch 147: Train D loss: -0.5493, G loss: 0.3523
Epoch 148: Train D loss: -0.4956, G loss: 0.2336
Epoch 149: Train D loss: -0.5818, G loss: 0.3203

Epoch 150: Train D loss: -0.5212, G loss: 0.3169
Epoch 151: Train D loss: -0.5566, G loss: 0.3608
Epoch 152: Train D loss: -0.5806, G loss: 0.3548
Epoch 153: Train D loss: -0.5133, G loss: 0.2572
Epoch 154: Train D loss: -0.5249, G loss: 0.2424
Epoch 155: Train D loss: -0.5939, G loss: 0.4077
Epoch 156: Train D loss: -0.5221, G loss: 0.3031
Epoch 157: Train D loss: -0.5151, G loss: 0.2718
Epoch 158: Train D loss: -0.4936, G loss: 0.1471
Epoch 159: Train D loss: -0.5628, G loss: 0.3093

Epoch 160: Train D loss: -0.5257, G loss: 0.3349
Epoch 161: Train D loss: -0.5389, G loss: 0.2649
Epoch 162: Train D loss: -0.5040, G loss: 0.2714
Epoch 163: Train D loss: -0.5420, G loss: 0.3232
Epoch 164: Train D loss: -0.5358, G loss: 0.3543
Epoch 165: Train D loss: -0.5257, G loss: 0.3200
Epoch 166: Train D loss: -0.5210, G loss: 0.3078
Epoch 167: Train D loss: -0.5170, G loss: 0.3128
Epoch 168: Train D loss: -0.5233, G loss: 0.3136
Epoch 169: Train D loss: -0.4883, G loss: 0.3103

Epoch 170: Train D loss: -0.5566, G loss: 0.3195
Epoch 171: Train D loss: -0.4587, G loss: 0.2139
Epoch 172: Train D loss: -0.5817, G loss: 0.3554
Epoch 173: Train D loss: -0.4880, G loss: 0.2555
Epoch 174: Train D loss: -0.4492, G loss: 0.2087
Epoch 175: Train D loss: -0.5531, G loss: 0.3168
Epoch 176: Train D loss: -0.4614, G loss: 0.2002
Epoch 177: Train D loss: -0.5666, G loss: 0.3768
Epoch 178: Train D loss: -0.5142, G loss: 0.3343
Epoch 179: Train D loss: -0.4543, G loss: 0.2372

Epoch 180: Train D loss: -0.5546, G loss: 0.3231
Epoch 181: Train D loss: -0.4774, G loss: 0.2432
Epoch 182: Train D loss: -0.4942, G loss: 0.3133
Epoch 183: Train D loss: -0.4903, G loss: 0.1849
Epoch 184: Train D loss: -0.4952, G loss: 0.3608
Epoch 185: Train D loss: -0.4834, G loss: 0.2445
Epoch 186: Train D loss: -0.5519, G loss: 0.3770
Epoch 187: Train D loss: -0.4834, G loss: 0.2691
Epoch 188: Train D loss: -0.5027, G loss: 0.2783
Epoch 189: Train D loss: -0.5370, G loss: 0.3362

Epoch 190: Train D loss: -0.4655, G loss: 0.2525
Epoch 191: Train D loss: -0.5350, G loss: 0.3552
Epoch 192: Train D loss: -0.4721, G loss: 0.2872
Epoch 193: Train D loss: -0.5003, G loss: 0.2804
Epoch 194: Train D loss: -0.4733, G loss: 0.3368
Epoch 195: Train D loss: -0.5621, G loss: 0.3732
Epoch 196: Train D loss: -0.4547, G loss: 0.3038
Epoch 197: Train D loss: -0.5232, G loss: 0.3305
Epoch 198: Train D loss: -0.4667, G loss: 0.2655
Epoch 199: Train D loss: -0.5025, G loss: 0.3141

Epoch 200: Train D loss: -0.4612, G loss: 0.3020
Epoch 201: Train D loss: -0.4728, G loss: 0.2306
Epoch 202: Train D loss: -0.4751, G loss: 0.3059
Epoch 203: Train D loss: -0.5179, G loss: 0.3748
Epoch 204: Train D loss: -0.4711, G loss: 0.3101
Epoch 205: Train D loss: -0.4481, G loss: 0.3293
Epoch 206: Train D loss: -0.5180, G loss: 0.3090
Epoch 207: Train D loss: -0.4809, G loss: 0.3472
Epoch 208: Train D loss: -0.4929, G loss: 0.2513
Epoch 209: Train D loss: -0.4617, G loss: 0.2828

Epoch 210: Train D loss: -0.5414, G loss: 0.3868
Epoch 211: Train D loss: -0.4608, G loss: 0.2743
Epoch 212: Train D loss: -0.4882, G loss: 0.3160
Epoch 213: Train D loss: -0.4903, G loss: 0.2261
Epoch 214: Train D loss: -0.4931, G loss: 0.2156
Epoch 215: Train D loss: -0.4610, G loss: 0.2640
Epoch 216: Train D loss: -0.5045, G loss: 0.2912
Epoch 217: Train D loss: -0.4573, G loss: 0.2920
Epoch 218: Train D loss: -0.5032, G loss: 0.3111
Epoch 219: Train D loss: -0.4180, G loss: 0.2162

Epoch 220: Train D loss: -0.5277, G loss: 0.3208
Epoch 221: Train D loss: -0.4469, G loss: 0.2366
Epoch 222: Train D loss: -0.5109, G loss: 0.3173
Epoch 223: Train D loss: -0.4752, G loss: 0.3051
Epoch 224: Train D loss: -0.5223, G loss: 0.3733
Epoch 225: Train D loss: -0.4039, G loss: 0.1714
Epoch 226: Train D loss: -0.4615, G loss: 0.2363
Epoch 227: Train D loss: -0.4768, G loss: 0.3327
Epoch 228: Train D loss: -0.4357, G loss: 0.1772
Epoch 229: Train D loss: -0.5192, G loss: 0.3695

Epoch 230: Train D loss: -0.4827, G loss: 0.2624
Epoch 231: Train D loss: -0.4615, G loss: 0.2512
Epoch 232: Train D loss: -0.4385, G loss: 0.1840
Epoch 233: Train D loss: -0.4945, G loss: 0.3152
Epoch 234: Train D loss: -0.4655, G loss: 0.3278
Epoch 235: Train D loss: -0.4738, G loss: 0.3374
Epoch 236: Train D loss: -0.4532, G loss: 0.2772
Epoch 237: Train D loss: -0.4261, G loss: 0.1727
Epoch 238: Train D loss: -0.4854, G loss: 0.1727
Epoch 239: Train D loss: -0.4610, G loss: 0.2604

Epoch 240: Train D loss: -0.4557, G loss: 0.2725
Epoch 241: Train D loss: -0.4958, G loss: 0.3120
Epoch 242: Train D loss: -0.4519, G loss: 0.2336
Epoch 243: Train D loss: -0.4744, G loss: 0.3119
Epoch 244: Train D loss: -0.4304, G loss: 0.1802
Epoch 245: Train D loss: -0.4870, G loss: 0.2922
Epoch 246: Train D loss: -0.4211, G loss: 0.2864
Epoch 247: Train D loss: -0.4684, G loss: 0.2549
Epoch 248: Train D loss: -0.4288, G loss: 0.1937
Epoch 249: Train D loss: -0.5093, G loss: 0.2911

Epoch 250: Train D loss: -0.4496, G loss: 0.3173
Epoch 251: Train D loss: -0.4506, G loss: 0.2476
Epoch 252: Train D loss: -0.4545, G loss: 0.3008
Epoch 253: Train D loss: -0.4794, G loss: 0.3064
Epoch 254: Train D loss: -0.4329, G loss: 0.2461
Epoch 255: Train D loss: -0.4612, G loss: 0.3177
Epoch 256: Train D loss: -0.4364, G loss: 0.2566
Epoch 257: Train D loss: -0.4428, G loss: 0.2165
Epoch 258: Train D loss: -0.4179, G loss: 0.2762
Epoch 259: Train D loss: -0.4870, G loss: 0.3190

Epoch 260: Train D loss: -0.4257, G loss: 0.2434
Epoch 261: Train D loss: -0.3834, G loss: 0.1874
Epoch 262: Train D loss: -0.4639, G loss: 0.3219
Epoch 263: Train D loss: -0.4426, G loss: 0.2938
Epoch 264: Train D loss: -0.4858, G loss: 0.2983
Epoch 265: Train D loss: -0.4438, G loss: 0.3005
Epoch 266: Train D loss: -0.4347, G loss: 0.2685
Epoch 267: Train D loss: -0.4632, G loss: 0.2412
Epoch 268: Train D loss: -0.4347, G loss: 0.3064
Epoch 269: Train D loss: -0.4426, G loss: 0.3141

Epoch 270: Train D loss: -0.4450, G loss: 0.2698
Epoch 271: Train D loss: -0.4017, G loss: 0.1301
Epoch 272: Train D loss: -0.4728, G loss: 0.2955
Epoch 273: Train D loss: -0.4224, G loss: 0.1896
Epoch 274: Train D loss: -0.4218, G loss: 0.2128
Epoch 275: Train D loss: -0.4780, G loss: 0.2925
Epoch 276: Train D loss: -0.4397, G loss: 0.2963
Epoch 277: Train D loss: -0.4463, G loss: 0.2299
Epoch 278: Train D loss: -0.4356, G loss: 0.3044
Epoch 279: Train D loss: -0.4483, G loss: 0.2750

Epoch 280: Train D loss: -0.4312, G loss: 0.2676
Epoch 281: Train D loss: -0.4409, G loss: 0.2906
Epoch 282: Train D loss: -0.4464, G loss: 0.2933
Epoch 283: Train D loss: -0.4409, G loss: 0.1911
Epoch 284: Train D loss: -0.4241, G loss: 0.1807
Epoch 285: Train D loss: -0.4174, G loss: 0.2371
Epoch 286: Train D loss: -0.4385, G loss: 0.2776
Epoch 287: Train D loss: -0.4441, G loss: 0.3239
Epoch 288: Train D loss: -0.3909, G loss: 0.1265
Epoch 289: Train D loss: -0.4617, G loss: 0.3183

Epoch 290: Train D loss: -0.4374, G loss: 0.2967
Epoch 291: Train D loss: -0.4362, G loss: 0.2297
Epoch 292: Train D loss: -0.4295, G loss: 0.2365
Epoch 293: Train D loss: -0.4244, G loss: 0.2824
Epoch 294: Train D loss: -0.4617, G loss: 0.3120
Epoch 295: Train D loss: -0.3845, G loss: 0.1841
Epoch 296: Train D loss: -0.4179, G loss: 0.3275
Epoch 297: Train D loss: -0.3968, G loss: 0.2162
Epoch 298: Train D loss: -0.4360, G loss: 0.2535
Epoch 299: Train D loss: -0.4168, G loss: 0.1963

n_d = 5

# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device, 
                                    latent_dim, n_d, weight_clip)

loss_plot(d_loss_hist, g_loss_hist)
Epoch 0: Train D loss: -0.0630, G loss: 0.0124

Epoch 1: Train D loss: -0.1226, G loss: 0.0588
Epoch 2: Train D loss: -0.2772, G loss: 0.1625
Epoch 3: Train D loss: -0.4880, G loss: 0.2672
Epoch 4: Train D loss: -0.6543, G loss: 0.3397
Epoch 5: Train D loss: -0.7899, G loss: 0.4041
Epoch 6: Train D loss: -0.8909, G loss: 0.4511
Epoch 7: Train D loss: -0.9759, G loss: 0.4947
Epoch 8: Train D loss: -1.0392, G loss: 0.5194
Epoch 9: Train D loss: -1.1024, G loss: 0.5463

Epoch 10: Train D loss: -1.1374, G loss: 0.5677
Epoch 11: Train D loss: -1.1750, G loss: 0.5820
Epoch 12: Train D loss: -1.2188, G loss: 0.5988
Epoch 13: Train D loss: -1.2543, G loss: 0.6115
Epoch 14: Train D loss: -1.2656, G loss: 0.6200
Epoch 15: Train D loss: -1.2664, G loss: 0.6195
Epoch 16: Train D loss: -1.2058, G loss: 0.6176
Epoch 17: Train D loss: -1.2978, G loss: 0.6354
Epoch 18: Train D loss: -1.3151, G loss: 0.6405
Epoch 19: Train D loss: -1.3089, G loss: 0.6427

Epoch 20: Train D loss: -1.2956, G loss: 0.6347
Epoch 21: Train D loss: -1.2645, G loss: 0.6462
Epoch 22: Train D loss: -1.1193, G loss: 0.6170
Epoch 23: Train D loss: -1.0726, G loss: 0.5990
Epoch 24: Train D loss: -1.2008, G loss: 0.6434
Epoch 25: Train D loss: -1.2399, G loss: 0.6336
Epoch 26: Train D loss: -1.2748, G loss: 0.6413
Epoch 27: Train D loss: -1.2918, G loss: 0.6473
Epoch 28: Train D loss: -1.3105, G loss: 0.6513
Epoch 29: Train D loss: -1.3160, G loss: 0.6507

Epoch 30: Train D loss: -1.2992, G loss: 0.6479
Epoch 31: Train D loss: -1.0788, G loss: 0.6045
Epoch 32: Train D loss: -1.1036, G loss: 0.5824
Epoch 33: Train D loss: -1.1215, G loss: 0.6005
Epoch 34: Train D loss: -0.7472, G loss: 0.5509
Epoch 35: Train D loss: -1.1456, G loss: 0.5953
Epoch 36: Train D loss: -1.1316, G loss: 0.6104
Epoch 37: Train D loss: -1.1104, G loss: 0.6178
Epoch 38: Train D loss: -0.9294, G loss: 0.5449
Epoch 39: Train D loss: -0.8962, G loss: 0.5298

Epoch 40: Train D loss: -0.9316, G loss: 0.5615
Epoch 41: Train D loss: -1.0236, G loss: 0.5511
Epoch 42: Train D loss: -1.0571, G loss: 0.5896
Epoch 43: Train D loss: -1.1424, G loss: 0.5962
Epoch 44: Train D loss: -1.1372, G loss: 0.5895
Epoch 45: Train D loss: -1.0107, G loss: 0.5562
Epoch 46: Train D loss: -1.0414, G loss: 0.5619
Epoch 47: Train D loss: -1.0015, G loss: 0.5283
Epoch 48: Train D loss: -1.0139, G loss: 0.5739
Epoch 49: Train D loss: -1.0580, G loss: 0.5779

Epoch 50: Train D loss: -1.0418, G loss: 0.5576
Epoch 51: Train D loss: -0.9481, G loss: 0.5497
Epoch 52: Train D loss: -0.9244, G loss: 0.5230
Epoch 53: Train D loss: -1.0690, G loss: 0.5790
Epoch 54: Train D loss: -0.9700, G loss: 0.5142
Epoch 55: Train D loss: -0.9639, G loss: 0.5463
Epoch 56: Train D loss: -0.9848, G loss: 0.5491
Epoch 57: Train D loss: -0.7943, G loss: 0.4534
Epoch 58: Train D loss: -1.0551, G loss: 0.5704
Epoch 59: Train D loss: -1.0164, G loss: 0.5529

Epoch 60: Train D loss: -0.8966, G loss: 0.5255
Epoch 61: Train D loss: -1.0065, G loss: 0.5540
Epoch 62: Train D loss: -0.8828, G loss: 0.4862
Epoch 63: Train D loss: -0.9472, G loss: 0.5092
Epoch 64: Train D loss: -0.9300, G loss: 0.5278
Epoch 65: Train D loss: -0.8969, G loss: 0.5086
Epoch 66: Train D loss: -0.8631, G loss: 0.4964
Epoch 67: Train D loss: -0.8524, G loss: 0.4396
Epoch 68: Train D loss: -0.8883, G loss: 0.4887
Epoch 69: Train D loss: -0.8141, G loss: 0.5004

Epoch 70: Train D loss: -0.9248, G loss: 0.4992
Epoch 71: Train D loss: -0.8239, G loss: 0.5091
Epoch 72: Train D loss: -0.8717, G loss: 0.4991
Epoch 73: Train D loss: -0.9119, G loss: 0.4988
Epoch 74: Train D loss: -0.7893, G loss: 0.4524
Epoch 75: Train D loss: -0.7797, G loss: 0.4225
Epoch 76: Train D loss: -0.8948, G loss: 0.5115
Epoch 77: Train D loss: -0.8946, G loss: 0.4762
Epoch 78: Train D loss: -0.7554, G loss: 0.4590
Epoch 79: Train D loss: -0.7923, G loss: 0.4602

Epoch 80: Train D loss: -0.8207, G loss: 0.4530
Epoch 81: Train D loss: -0.8063, G loss: 0.4188
Epoch 82: Train D loss: -0.8413, G loss: 0.4656
Epoch 83: Train D loss: -0.8797, G loss: 0.4904
Epoch 84: Train D loss: -0.6906, G loss: 0.3058
Epoch 85: Train D loss: -0.9591, G loss: 0.5327
Epoch 86: Train D loss: -0.9057, G loss: 0.5374
Epoch 87: Train D loss: -0.8035, G loss: 0.4851
Epoch 88: Train D loss: -0.7907, G loss: 0.4622
Epoch 89: Train D loss: -0.8599, G loss: 0.4548

Epoch 90: Train D loss: -0.7736, G loss: 0.3623
Epoch 91: Train D loss: -0.8764, G loss: 0.4713
Epoch 92: Train D loss: -0.8103, G loss: 0.3943
Epoch 93: Train D loss: -0.7316, G loss: 0.3623
Epoch 94: Train D loss: -0.7604, G loss: 0.4306
Epoch 95: Train D loss: -0.8484, G loss: 0.4677
Epoch 96: Train D loss: -0.7998, G loss: 0.4330
Epoch 97: Train D loss: -0.8010, G loss: 0.4270
Epoch 98: Train D loss: -0.8368, G loss: 0.4566
Epoch 99: Train D loss: -0.8658, G loss: 0.4785

Epoch 100: Train D loss: -0.8234, G loss: 0.4586
Epoch 101: Train D loss: -0.8309, G loss: 0.4406
Epoch 102: Train D loss: -0.7902, G loss: 0.4345
Epoch 103: Train D loss: -0.7636, G loss: 0.3491
Epoch 104: Train D loss: -0.7323, G loss: 0.3578
Epoch 105: Train D loss: -0.8110, G loss: 0.4529
Epoch 106: Train D loss: -0.8027, G loss: 0.4846
Epoch 107: Train D loss: -0.8100, G loss: 0.4054
Epoch 108: Train D loss: -0.7339, G loss: 0.4375
Epoch 109: Train D loss: -0.7872, G loss: 0.4110

Epoch 110: Train D loss: -0.7728, G loss: 0.4300
Epoch 111: Train D loss: -0.7609, G loss: 0.3717
Epoch 112: Train D loss: -0.7712, G loss: 0.4084
Epoch 113: Train D loss: -0.7885, G loss: 0.4314
Epoch 114: Train D loss: -0.7916, G loss: 0.4279
Epoch 115: Train D loss: -0.7877, G loss: 0.4424
Epoch 116: Train D loss: -0.7951, G loss: 0.4707
Epoch 117: Train D loss: -0.8079, G loss: 0.4778
Epoch 118: Train D loss: -0.7747, G loss: 0.4211
Epoch 119: Train D loss: -0.8120, G loss: 0.4646

Epoch 120: Train D loss: -0.7700, G loss: 0.4535
Epoch 121: Train D loss: -0.8006, G loss: 0.4753
Epoch 122: Train D loss: -0.7921, G loss: 0.4628
Epoch 123: Train D loss: -0.7840, G loss: 0.5015
Epoch 124: Train D loss: -0.7779, G loss: 0.4782
Epoch 125: Train D loss: -0.7671, G loss: 0.3502
Epoch 126: Train D loss: -0.7924, G loss: 0.4469
Epoch 127: Train D loss: -0.7849, G loss: 0.4666
Epoch 128: Train D loss: -0.7708, G loss: 0.4787
Epoch 129: Train D loss: -0.6837, G loss: 0.2903

Epoch 130: Train D loss: -0.8891, G loss: 0.4784
Epoch 131: Train D loss: -0.7105, G loss: 0.3741
Epoch 132: Train D loss: -0.8884, G loss: 0.4516
Epoch 133: Train D loss: -0.7277, G loss: 0.4051
Epoch 134: Train D loss: -0.7792, G loss: 0.4188
Epoch 135: Train D loss: -0.7584, G loss: 0.4442
Epoch 136: Train D loss: -0.8728, G loss: 0.4327
Epoch 137: Train D loss: -0.7223, G loss: 0.4547
Epoch 138: Train D loss: -0.7685, G loss: 0.4312
Epoch 139: Train D loss: -0.7625, G loss: 0.4098

Epoch 140: Train D loss: -0.7611, G loss: 0.3796
Epoch 141: Train D loss: -0.7260, G loss: 0.4300
Epoch 142: Train D loss: -0.8137, G loss: 0.4601
Epoch 143: Train D loss: -0.6957, G loss: 0.3378
Epoch 144: Train D loss: -0.7303, G loss: 0.4187
Epoch 145: Train D loss: -0.8014, G loss: 0.4597
Epoch 146: Train D loss: -0.7264, G loss: 0.3464
Epoch 147: Train D loss: -0.7313, G loss: 0.4332
Epoch 148: Train D loss: -0.7174, G loss: 0.3755
Epoch 149: Train D loss: -0.7505, G loss: 0.4173

Epoch 150: Train D loss: -0.7334, G loss: 0.3925
Epoch 151: Train D loss: -0.6950, G loss: 0.3743
Epoch 152: Train D loss: -0.7644, G loss: 0.4296
Epoch 153: Train D loss: -0.7124, G loss: 0.4091
Epoch 154: Train D loss: -0.7443, G loss: 0.4407
Epoch 155: Train D loss: -0.7033, G loss: 0.4470
Epoch 156: Train D loss: -0.7184, G loss: 0.3960
Epoch 157: Train D loss: -0.7151, G loss: 0.4138
Epoch 158: Train D loss: -0.6975, G loss: 0.2995
Epoch 159: Train D loss: -0.6625, G loss: 0.3629

Epoch 160: Train D loss: -0.6577, G loss: 0.3202
Epoch 161: Train D loss: -0.7196, G loss: 0.3440
Epoch 162: Train D loss: -0.7262, G loss: 0.4106
Epoch 163: Train D loss: -0.7068, G loss: 0.4366
Epoch 164: Train D loss: -0.6954, G loss: 0.3637
Epoch 165: Train D loss: -0.7033, G loss: 0.4257
Epoch 166: Train D loss: -0.6895, G loss: 0.3711
Epoch 167: Train D loss: -0.7114, G loss: 0.3665
Epoch 168: Train D loss: -0.6578, G loss: 0.2492
Epoch 169: Train D loss: -0.7039, G loss: 0.3997

Epoch 170: Train D loss: -0.7020, G loss: 0.4111
Epoch 171: Train D loss: -0.6601, G loss: 0.4192
Epoch 172: Train D loss: -0.6616, G loss: 0.3571
Epoch 173: Train D loss: -0.6807, G loss: 0.3974
Epoch 174: Train D loss: -0.6862, G loss: 0.4131
Epoch 175: Train D loss: -0.6907, G loss: 0.4605
Epoch 176: Train D loss: -0.6844, G loss: 0.3683
Epoch 177: Train D loss: -0.6664, G loss: 0.3714
Epoch 178: Train D loss: -0.7454, G loss: 0.4676
Epoch 179: Train D loss: -0.6347, G loss: 0.3714

Epoch 180: Train D loss: -0.6601, G loss: 0.3603
Epoch 181: Train D loss: -0.6814, G loss: 0.3628
Epoch 182: Train D loss: -0.6717, G loss: 0.3569
Epoch 183: Train D loss: -0.6995, G loss: 0.4094
Epoch 184: Train D loss: -0.7084, G loss: 0.3696
Epoch 185: Train D loss: -0.6244, G loss: 0.4075
Epoch 186: Train D loss: -0.7098, G loss: 0.3895
Epoch 187: Train D loss: -0.6938, G loss: 0.4319
Epoch 188: Train D loss: -0.6255, G loss: 0.3690
Epoch 189: Train D loss: -0.6778, G loss: 0.4144

Epoch 190: Train D loss: -0.6428, G loss: 0.3206
Epoch 191: Train D loss: -0.6384, G loss: 0.3057
Epoch 192: Train D loss: -0.6352, G loss: 0.3627
Epoch 193: Train D loss: -0.6885, G loss: 0.4062
Epoch 194: Train D loss: -0.6803, G loss: 0.3504
Epoch 195: Train D loss: -0.6453, G loss: 0.3176
Epoch 196: Train D loss: -0.6285, G loss: 0.3247
Epoch 197: Train D loss: -0.6595, G loss: 0.3622
Epoch 198: Train D loss: -0.6835, G loss: 0.4183
Epoch 199: Train D loss: -0.6227, G loss: 0.2633

Epoch 200: Train D loss: -0.6513, G loss: 0.3389
Epoch 201: Train D loss: -0.6568, G loss: 0.4569
Epoch 202: Train D loss: -0.6956, G loss: 0.3941
Epoch 203: Train D loss: -0.6654, G loss: 0.4216
Epoch 204: Train D loss: -0.6287, G loss: 0.3573
Epoch 205: Train D loss: -0.6250, G loss: 0.4319
Epoch 206: Train D loss: -0.6547, G loss: 0.3575
Epoch 207: Train D loss: -0.5743, G loss: 0.1798
Epoch 208: Train D loss: -0.6518, G loss: 0.3302
Epoch 209: Train D loss: -0.6270, G loss: 0.2731

Epoch 210: Train D loss: -0.6296, G loss: 0.3514
Epoch 211: Train D loss: -0.6564, G loss: 0.4021
Epoch 212: Train D loss: -0.6237, G loss: 0.3662
Epoch 213: Train D loss: -0.6392, G loss: 0.3149
Epoch 214: Train D loss: -0.6239, G loss: 0.4142
Epoch 215: Train D loss: -0.6398, G loss: 0.2743
Epoch 216: Train D loss: -0.6311, G loss: 0.3336
Epoch 217: Train D loss: -0.7337, G loss: 0.3800
Epoch 218: Train D loss: -0.6311, G loss: 0.3804
Epoch 219: Train D loss: -0.5960, G loss: 0.3615

Epoch 220: Train D loss: -0.6772, G loss: 0.3999
Epoch 221: Train D loss: -0.6494, G loss: 0.3504
Epoch 222: Train D loss: -0.5807, G loss: 0.3152
Epoch 223: Train D loss: -0.6730, G loss: 0.3661
Epoch 224: Train D loss: -0.6946, G loss: 0.4380
Epoch 225: Train D loss: -0.5530, G loss: 0.2479
Epoch 226: Train D loss: -0.6321, G loss: 0.3542
Epoch 227: Train D loss: -0.6699, G loss: 0.3984
Epoch 228: Train D loss: -0.5483, G loss: 0.2881
Epoch 229: Train D loss: -0.6327, G loss: 0.3045

Epoch 230: Train D loss: -0.6451, G loss: 0.3731
Epoch 231: Train D loss: -0.6219, G loss: 0.3793
Epoch 232: Train D loss: -0.6306, G loss: 0.4272
Epoch 233: Train D loss: -0.6088, G loss: 0.3027
Epoch 234: Train D loss: -0.6544, G loss: 0.4276
Epoch 235: Train D loss: -0.5706, G loss: 0.2841
Epoch 236: Train D loss: -0.6574, G loss: 0.4313
Epoch 237: Train D loss: -0.6459, G loss: 0.3720
Epoch 238: Train D loss: -0.5634, G loss: 0.3166
Epoch 239: Train D loss: -0.5994, G loss: 0.3035

Epoch 240: Train D loss: -0.6449, G loss: 0.3554
Epoch 241: Train D loss: -0.6107, G loss: 0.2231
Epoch 242: Train D loss: -0.5727, G loss: 0.2675
Epoch 243: Train D loss: -0.6637, G loss: 0.3722
Epoch 244: Train D loss: -0.5934, G loss: 0.4300
Epoch 245: Train D loss: -0.6117, G loss: 0.3971
Epoch 246: Train D loss: -0.6361, G loss: 0.3589
Epoch 247: Train D loss: -0.6369, G loss: 0.3707
Epoch 248: Train D loss: -0.6081, G loss: 0.4185
Epoch 249: Train D loss: -0.6003, G loss: 0.2571

Epoch 250: Train D loss: -0.6226, G loss: 0.4106
Epoch 251: Train D loss: -0.6126, G loss: 0.3514
Epoch 252: Train D loss: -0.5793, G loss: 0.3305
Epoch 253: Train D loss: -0.5840, G loss: 0.3235
Epoch 254: Train D loss: -0.6119, G loss: 0.2844
Epoch 255: Train D loss: -0.6228, G loss: 0.3958
Epoch 256: Train D loss: -0.5841, G loss: 0.3458
Epoch 257: Train D loss: -0.5996, G loss: 0.3575
Epoch 258: Train D loss: -0.5883, G loss: 0.2760
Epoch 259: Train D loss: -0.5904, G loss: 0.3796

Epoch 260: Train D loss: -0.6249, G loss: 0.3378
Epoch 261: Train D loss: -0.5692, G loss: 0.2254
Epoch 262: Train D loss: -0.5890, G loss: 0.3015
Epoch 263: Train D loss: -0.5707, G loss: 0.2532
Epoch 264: Train D loss: -0.6061, G loss: 0.4092
Epoch 265: Train D loss: -0.5926, G loss: 0.3644
Epoch 266: Train D loss: -0.5772, G loss: 0.3898
Epoch 267: Train D loss: -0.5789, G loss: 0.3077
Epoch 268: Train D loss: -0.5762, G loss: 0.3402
Epoch 269: Train D loss: -0.5707, G loss: 0.2558

Epoch 270: Train D loss: -0.5700, G loss: 0.3652
Epoch 271: Train D loss: -0.5834, G loss: 0.2866
Epoch 272: Train D loss: -0.5683, G loss: 0.3190
Epoch 273: Train D loss: -0.5616, G loss: 0.3597
Epoch 274: Train D loss: -0.5502, G loss: 0.3622
Epoch 275: Train D loss: -0.5850, G loss: 0.3506
Epoch 276: Train D loss: -0.5860, G loss: 0.2956
Epoch 277: Train D loss: -0.5778, G loss: 0.4034
Epoch 278: Train D loss: -0.6024, G loss: 0.4465
Epoch 279: Train D loss: -0.5659, G loss: 0.3424

Epoch 280: Train D loss: -0.5398, G loss: 0.2564
Epoch 281: Train D loss: -0.5926, G loss: 0.2978
Epoch 282: Train D loss: -0.5837, G loss: 0.3241
Epoch 283: Train D loss: -0.5839, G loss: 0.3225
Epoch 284: Train D loss: -0.5587, G loss: 0.1916
Epoch 285: Train D loss: -0.5656, G loss: 0.3763
Epoch 286: Train D loss: -0.5593, G loss: 0.3103
Epoch 287: Train D loss: -0.5779, G loss: 0.2773
Epoch 288: Train D loss: -0.5813, G loss: 0.3878
Epoch 289: Train D loss: -0.6136, G loss: 0.4114

Epoch 290: Train D loss: -0.5437, G loss: 0.3981
Epoch 291: Train D loss: -0.5895, G loss: 0.4018
Epoch 292: Train D loss: -0.5595, G loss: 0.3615
Epoch 293: Train D loss: -0.5514, G loss: 0.2601
Epoch 294: Train D loss: -0.5468, G loss: 0.3513
Epoch 295: Train D loss: -0.6066, G loss: 0.3609
Epoch 296: Train D loss: -0.5875, G loss: 0.3668
Epoch 297: Train D loss: -0.5536, G loss: 0.2995
Epoch 298: Train D loss: -0.5507, G loss: 0.2963
Epoch 299: Train D loss: -0.5845, G loss: 0.2848

WGAN-GP (Improved WGAN)

In WGAN the discriminator's weights must be clipped, and in practice we find that deeper WGANs trained this way have trouble converging.

The main reasons are roughly as follows:

  1. Experiments show that most of the weights end up at -c or c, which means most weights can effectively take only two values. For a deep neural network this is far too simple and wastes its powerful fitting capacity.
  2. Clipping easily causes vanishing or exploding gradients. The discriminator is a multi-layer network: if the clipping threshold is set slightly too small, the gradient shrinks a little at every layer and decays exponentially over many layers; if it is set slightly too large, the gradient explodes instead.
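The first point can be illustrated in isolation. The sketch below (a toy demonstration, not part of the course code; in real WGAN training it is the interplay of gradient steps and repeated clipping that drives the weights to the boundary) clamps a freshly initialized linear layer with a typical threshold and measures how many weights get pinned to ±c:

```python
import torch
import torch.nn as nn

# a toy discriminator layer whose weights are much larger than the threshold
layer = nn.Linear(100, 100)
nn.init.normal_(layer.weight, std=1.0)

c = 0.01  # a typical WGAN clipping threshold
with torch.no_grad():
    for p in layer.parameters():
        p.clamp_(-c, c)  # the clipping step applied after each D update in WGAN

# almost every weight is now pinned to exactly -c or +c
w = layer.weight.flatten()
at_boundary = ((w == c) | (w == -c)).float().mean().item()
print(at_boundary)  # close to 1.0
```

Since a standard-normal weight almost never falls inside (-0.01, 0.01), nearly all weights collapse onto the two boundary values, which is exactly the degenerate parameter distribution described above.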

WGAN-GP therefore replaces clipping with a gradient penalty. Since the Lipschitz constraint requires the norm of the discriminator's gradient to stay below K, this can be enforced directly with an extra loss term, so the improved objective for D becomes:

$$\min_D \; \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] - \mathbb{E}_{x \sim P_r}[D(x)] + \lambda \, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right]$$

where $\hat{x}$ is sampled uniformly along straight lines between pairs of real and generated samples.

Below is the concrete implementation of WGAN-GP. As with WGAN, we only implement its training code; for the models we reuse the DCGAN architecture directly.

import torch.autograd as autograd

def wgan_gp_train(trainloader, G, D, G_optimizer, D_optimizer, device, z_dim, lambda_=10, n_d=2):
    
    D.train()
    G.train()
    
    D_total_loss = 0
    G_total_loss = 0
    
    
    for i, (x, _) in enumerate(trainloader):
        x = x.to(device)

        # update D network
        # D optimizer zero grads
        D_optimizer.zero_grad()
        
        # D real loss from real images
        d_real = D(x)
        d_real_loss = - d_real.mean()
        
        # D fake loss from fake images generated by G
        z = torch.rand(x.size(0), z_dim).to(device)
        g_z = G(z)
        d_fake = D(g_z.detach())  # detach so the D update does not backprop into G
        d_fake_loss = d_fake.mean()
        
        # D gradient penalty
        
        #   sample a random interpolation coefficient epsilon in [0, 1)
        epsilon = torch.rand(x.size(0), 1, 1, 1).to(device)
        #   interpolate between real and generated images (detached from G)
        x_hat = epsilon * x + (1 - epsilon) * g_z.detach()
        x_hat.requires_grad_(True)

        y_hat = D(x_hat)
        #   compute the gradients of y_hat with respect to x_hat
        gradients = autograd.grad(outputs=y_hat, inputs=x_hat, grad_outputs=torch.ones(y_hat.size()).to(device),
                                  create_graph=True, retain_graph=True, only_inputs=True)[0]
        #   gradient penalty: E[(||grad||_2 - 1)^2]
        gradient_penalty = torch.mean((gradients.view(gradients.size(0), -1).norm(p=2, dim=1) - 1) ** 2)
        
        # D backward and step
        d_loss = d_real_loss + d_fake_loss + lambda_ * gradient_penalty
        d_loss.backward()
        D_optimizer.step()
        
            
        D_total_loss += d_loss.item()

        # update G network
        # G optimizer zero grads
        if (i + 1) % n_d == 0:
            G_optimizer.zero_grad()

            # G loss
            g_z = G(z)
            d_fake = D(g_z)
            g_loss = - d_fake.mean()

            # G backward and step
            g_loss.backward()
            G_optimizer.step()
            
            G_total_loss += g_loss.item()
    
    return D_total_loss / len(trainloader), G_total_loss * n_d / len(trainloader)
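For reference, the interpolation-and-penalty computation inside the training loop can be factored into a small standalone helper and sanity-checked on a toy discriminator. The `gradient_penalty` function below is a hypothetical sketch under that refactoring, not part of the course code:

```python
import torch
import torch.nn as nn
import torch.autograd as autograd

def gradient_penalty(D, real, fake, device='cpu'):
    """E[(||grad_x_hat D(x_hat)||_2 - 1)^2] on random interpolations x_hat."""
    # one epsilon per sample, broadcast over the remaining dimensions
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=device)
    x_hat = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    y_hat = D(x_hat)
    grads = autograd.grad(outputs=y_hat, inputs=x_hat,
                          grad_outputs=torch.ones_like(y_hat),
                          create_graph=True)[0]
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()

# toy linear "discriminator" on 8-dimensional inputs: its input gradient is
# its weight matrix, so the penalty equals (||W||_2 - 1)^2 for every sample
D = nn.Linear(8, 1)
real, fake = torch.randn(4, 8), torch.randn(4, 8)
gp = gradient_penalty(D, real, fake)
print(gp.item())  # a non-negative scalar
```

Because a linear discriminator has the same input gradient everywhere, the penalty here is constant across samples, which makes it easy to verify the computation by hand before plugging the helper into a real training loop.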
# hyper params

# z dim
latent_dim = 100

# image size and channel
image_size=32
image_channel=3

# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)

# epochs and batch size
n_epochs = 300
batch_size = 32

# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:0')

# n_d: number of D updates per G update
n_d = 2
lambda_ = 10

# furniture dataset and dataloader
train_dataset = load_furniture_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist = []
g_loss_hist = []

for epoch in range(n_epochs):
    d_loss, g_loss = wgan_gp_train(trainloader, G, D, G_optimizer, D_optimizer, device, 
                           z_dim=latent_dim, lambda_=lambda_, n_d=n_d)
    print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))
    
    d_loss_hist.append(d_loss)
    g_loss_hist.append(g_loss)
    
    if epoch == 0 or (epoch + 1) % 10 == 0:
        visualize_results(G, device, latent_dim)
Epoch 0: Train D loss: 1.1936, G loss: 2.7239

Epoch 1: Train D loss: -8.1520, G loss: 8.7105
Epoch 2: Train D loss: -14.5335, G loss: 15.9505
Epoch 3: Train D loss: -22.4751, G loss: 25.4797
Epoch 4: Train D loss: -25.5143, G loss: 26.5167
Epoch 5: Train D loss: -20.2827, G loss: 20.9673
Epoch 6: Train D loss: -15.2205, G loss: 17.7352
Epoch 7: Train D loss: -15.0674, G loss: 17.9785
Epoch 8: Train D loss: -14.2372, G loss: 19.3913
Epoch 9: Train D loss: -13.6457, G loss: 19.7493

Epoch 10: Train D loss: -12.9571, G loss: 20.5028
Epoch 11: Train D loss: -12.0761, G loss: 20.7169
Epoch 12: Train D loss: -12.5201, G loss: 21.4914
Epoch 13: Train D loss: -12.7979, G loss: 20.8781
Epoch 14: Train D loss: -11.8754, G loss: 21.4311
Epoch 15: Train D loss: -12.0360, G loss: 22.1997
Epoch 16: Train D loss: -12.3443, G loss: 21.8415
Epoch 17: Train D loss: -12.4492, G loss: 22.3451
Epoch 18: Train D loss: -12.4704, G loss: 23.1174
Epoch 19: Train D loss: -12.0635, G loss: 24.3485

Epoch 20: Train D loss: -11.5159, G loss: 23.7863
Epoch 21: Train D loss: -10.8694, G loss: 23.1774
Epoch 22: Train D loss: -11.7171, G loss: 23.6735
Epoch 23: Train D loss: -12.1799, G loss: 24.5387
Epoch 24: Train D loss: -11.2967, G loss: 24.4599
Epoch 25: Train D loss: -9.2917, G loss: 25.2789
Epoch 26: Train D loss: -11.7295, G loss: 24.9656
Epoch 27: Train D loss: -11.9890, G loss: 25.1133
Epoch 28: Train D loss: -11.0419, G loss: 26.9544
Epoch 29: Train D loss: -11.4329, G loss: 27.7644

Epoch 30: Train D loss: -3.6197, G loss: 26.4551
Epoch 31: Train D loss: -6.5136, G loss: 30.6321
Epoch 32: Train D loss: -8.2981, G loss: 29.5923
Epoch 33: Train D loss: -9.9158, G loss: 28.7154
Epoch 34: Train D loss: -9.8465, G loss: 27.9302
Epoch 35: Train D loss: -10.8185, G loss: 29.2933
Epoch 36: Train D loss: -11.1391, G loss: 29.2398
Epoch 37: Train D loss: -9.0321, G loss: 30.8328
Epoch 38: Train D loss: -11.3229, G loss: 30.4662
Epoch 39: Train D loss: -10.1199, G loss: 29.7129

Epoch 40: Train D loss: -9.8457, G loss: 31.1181
Epoch 41: Train D loss: -6.8791, G loss: 31.1178
Epoch 42: Train D loss: -5.5407, G loss: 27.7953
Epoch 43: Train D loss: -8.3332, G loss: 30.8901
Epoch 44: Train D loss: -7.7765, G loss: 29.8677
Epoch 45: Train D loss: -4.2571, G loss: 31.1822
Epoch 46: Train D loss: -7.2122, G loss: 30.9042
Epoch 47: Train D loss: -8.4089, G loss: 31.2014
Epoch 48: Train D loss: -6.9788, G loss: 30.3707
Epoch 49: Train D loss: -7.2731, G loss: 30.4241

Epoch 50: Train D loss: -7.9410, G loss: 29.6069
Epoch 51: Train D loss: -6.8505, G loss: 30.9109
Epoch 52: Train D loss: -7.0091, G loss: 30.5453
Epoch 53: Train D loss: -6.3319, G loss: 30.9075
Epoch 54: Train D loss: -6.5992, G loss: 29.9271
Epoch 55: Train D loss: -7.0126, G loss: 29.5541
Epoch 56: Train D loss: -5.8230, G loss: 29.5403
Epoch 57: Train D loss: -5.0243, G loss: 28.8887
Epoch 58: Train D loss: -6.7128, G loss: 31.2758
Epoch 59: Train D loss: -5.2409, G loss: 29.0315

Epoch 60: Train D loss: -6.7178, G loss: 28.8515
Epoch 61: Train D loss: -5.8846, G loss: 30.3054
Epoch 62: Train D loss: -6.2823, G loss: 29.3366
Epoch 63: Train D loss: -6.1211, G loss: 28.1004
Epoch 64: Train D loss: -5.4646, G loss: 30.4576
Epoch 65: Train D loss: -6.5130, G loss: 28.9978
Epoch 66: Train D loss: -5.3127, G loss: 30.0823
Epoch 67: Train D loss: -4.0552, G loss: 30.3088
Epoch 68: Train D loss: -6.3827, G loss: 31.5992
Epoch 69: Train D loss: -5.6003, G loss: 29.4292

Epoch 70: Train D loss: -4.1997, G loss: 30.0727
Epoch 71: Train D loss: -5.8092, G loss: 30.8308
Epoch 72: Train D loss: -4.9789, G loss: 30.4201
Epoch 73: Train D loss: -5.5702, G loss: 29.7956
Epoch 74: Train D loss: -5.6242, G loss: 30.8499
Epoch 75: Train D loss: -5.7687, G loss: 30.4139
Epoch 76: Train D loss: -5.8114, G loss: 31.2613
Epoch 77: Train D loss: -5.3761, G loss: 33.1969
Epoch 78: Train D loss: -5.1805, G loss: 30.5959
Epoch 79: Train D loss: -6.2201, G loss: 31.0029

Epoch 80: Train D loss: -5.7552, G loss: 32.1509
Epoch 81: Train D loss: -5.8629, G loss: 32.2645
Epoch 82: Train D loss: -5.2672, G loss: 32.6459
Epoch 83: Train D loss: -5.8282, G loss: 32.8028
Epoch 84: Train D loss: -5.6751, G loss: 33.5329
Epoch 85: Train D loss: -6.9970, G loss: 31.5345
Epoch 86: Train D loss: -5.6810, G loss: 35.0075
Epoch 87: Train D loss: -5.4079, G loss: 32.0348
Epoch 88: Train D loss: -5.2579, G loss: 35.2726
Epoch 89: Train D loss: -6.2694, G loss: 35.7263

Epoch 90: Train D loss: -6.3974, G loss: 32.7170
Epoch 91: Train D loss: -5.0408, G loss: 34.1664
Epoch 92: Train D loss: -4.2319, G loss: 33.5481
Epoch 93: Train D loss: -5.6979, G loss: 36.6392
Epoch 94: Train D loss: -6.5240, G loss: 36.3997
Epoch 95: Train D loss: -5.9683, G loss: 35.2316
Epoch 96: Train D loss: -5.6618, G loss: 33.6316
Epoch 97: Train D loss: -3.2506, G loss: 32.7353
Epoch 98: Train D loss: -5.7600, G loss: 35.7527
Epoch 99: Train D loss: -6.6229, G loss: 37.2489

Epoch 100: Train D loss: -6.5922, G loss: 35.5876
Epoch 101: Train D loss: -6.0123, G loss: 36.2869
Epoch 102: Train D loss: -3.2192, G loss: 33.6614
Epoch 103: Train D loss: -5.2284, G loss: 37.9285
Epoch 104: Train D loss: -6.4634, G loss: 37.4508
Epoch 105: Train D loss: -6.4344, G loss: 37.0513
Epoch 106: Train D loss: -7.4529, G loss: 38.2233
Epoch 107: Train D loss: -5.5016, G loss: 38.7531
Epoch 108: Train D loss: -6.2586, G loss: 37.8168
Epoch 109: Train D loss: -6.0397, G loss: 39.3098

Epoch 110: Train D loss: -4.5642, G loss: 37.6431
Epoch 111: Train D loss: -5.3193, G loss: 35.2512
Epoch 112: Train D loss: -6.1000, G loss: 40.2987
Epoch 113: Train D loss: -6.5263, G loss: 38.3639
Epoch 114: Train D loss: -7.5932, G loss: 37.9854
Epoch 115: Train D loss: -7.1614, G loss: 39.9800
Epoch 116: Train D loss: -7.0278, G loss: 39.9854
Epoch 117: Train D loss: -6.7715, G loss: 38.8318
Epoch 118: Train D loss: -1.1647, G loss: 38.7853
Epoch 119: Train D loss: -3.2441, G loss: 33.7709

Epoch 120: Train D loss: -4.3060, G loss: 39.5775
Epoch 121: Train D loss: -4.5076, G loss: 39.6620
Epoch 122: Train D loss: -6.2401, G loss: 41.3046
Epoch 123: Train D loss: -5.5641, G loss: 37.7670
Epoch 124: Train D loss: -6.1229, G loss: 41.1236
Epoch 125: Train D loss: -6.5736, G loss: 38.8667
Epoch 126: Train D loss: -4.7117, G loss: 39.2155
Epoch 127: Train D loss: -2.0358, G loss: 35.2175
Epoch 128: Train D loss: -3.6457, G loss: 39.2292
Epoch 129: Train D loss: -5.1131, G loss: 38.1220

Epoch 130: Train D loss: -5.2960, G loss: 40.9473
Epoch 131: Train D loss: -6.0590, G loss: 39.2857
Epoch 132: Train D loss: -5.8026, G loss: 39.0368
Epoch 133: Train D loss: -3.9538, G loss: 40.0107
Epoch 134: Train D loss: -5.5867, G loss: 39.1015
Epoch 135: Train D loss: -6.4062, G loss: 41.5218
Epoch 136: Train D loss: -6.3482, G loss: 42.0300
Epoch 137: Train D loss: -6.8324, G loss: 40.0019
Epoch 138: Train D loss: -5.9646, G loss: 40.5305
Epoch 139: Train D loss: -6.2529, G loss: 40.7059

Epoch 140: Train D loss: -7.0281, G loss: 41.6095
Epoch 141: Train D loss: -5.6044, G loss: 41.0566
Epoch 142: Train D loss: -6.6347, G loss: 42.8144
Epoch 143: Train D loss: -6.3203, G loss: 39.1255
Epoch 144: Train D loss: -7.2837, G loss: 42.4573
Epoch 145: Train D loss: -6.6416, G loss: 41.1581
Epoch 146: Train D loss: -5.5539, G loss: 39.3411
Epoch 147: Train D loss: -1.5165, G loss: 37.6084
Epoch 148: Train D loss: -2.2434, G loss: 39.4644
Epoch 149: Train D loss: -4.0301, G loss: 37.9678

Epoch 150: Train D loss: -5.0795, G loss: 40.9997
Epoch 151: Train D loss: -5.4891, G loss: 42.1863
Epoch 152: Train D loss: -5.4200, G loss: 38.6378
Epoch 153: Train D loss: -5.9140, G loss: 42.6217
Epoch 154: Train D loss: -6.4914, G loss: 40.0770
Epoch 155: Train D loss: -4.3930, G loss: 39.7363
Epoch 156: Train D loss: -5.4038, G loss: 41.1785
Epoch 157: Train D loss: -6.1170, G loss: 42.2868
Epoch 158: Train D loss: -6.7845, G loss: 40.8755
Epoch 159: Train D loss: -5.5417, G loss: 42.5627

Epoch 160: Train D loss: -6.9687, G loss: 39.8278
Epoch 161: Train D loss: -2.3769, G loss: 41.6123
Epoch 162: Train D loss: -4.9251, G loss: 36.9833
Epoch 163: Train D loss: -5.6323, G loss: 42.5913
Epoch 164: Train D loss: -6.1550, G loss: 41.6735
Epoch 165: Train D loss: -6.5711, G loss: 44.5239
Epoch 166: Train D loss: -6.3757, G loss: 43.1277
Epoch 167: Train D loss: -6.1421, G loss: 42.5045
Epoch 168: Train D loss: -3.1840, G loss: 40.6879
Epoch 169: Train D loss: -0.8210, G loss: 38.2161

Epoch 170: Train D loss: -2.1098, G loss: 37.2772
Epoch 171: Train D loss: -2.6259, G loss: 41.1633
Epoch 172: Train D loss: -3.0488, G loss: 42.6150
Epoch 173: Train D loss: -3.5877, G loss: 39.7646
Epoch 174: Train D loss: -3.7149, G loss: 42.2314
Epoch 175: Train D loss: -4.5943, G loss: 41.8063
Epoch 176: Train D loss: -4.6582, G loss: 43.4932
Epoch 177: Train D loss: -5.3836, G loss: 44.0866
Epoch 178: Train D loss: -5.3680, G loss: 44.7280
Epoch 179: Train D loss: -5.7290, G loss: 45.0711

Epoch 180: Train D loss: -5.7135, G loss: 44.7881
Epoch 181: Train D loss: -6.3135, G loss: 45.8640
Epoch 182: Train D loss: -6.0633, G loss: 45.2252
Epoch 183: Train D loss: -6.3735, G loss: 45.4457
Epoch 184: Train D loss: -6.0490, G loss: 44.6393
Epoch 185: Train D loss: -5.7547, G loss: 44.8938
Epoch 186: Train D loss: -6.5404, G loss: 46.9443
Epoch 187: Train D loss: -6.1654, G loss: 42.0256
Epoch 188: Train D loss: -1.9355, G loss: 45.1646
Epoch 189: Train D loss: -2.0336, G loss: 43.6953

Epoch 190: Train D loss: -3.4773, G loss: 40.8780
Epoch 191: Train D loss: -3.8717, G loss: 46.3085
Epoch 192: Train D loss: -4.7110, G loss: 44.4004
Epoch 193: Train D loss: -5.5165, G loss: 45.3767
Epoch 194: Train D loss: -5.7456, G loss: 46.7572
Epoch 195: Train D loss: -6.0092, G loss: 46.0361
Epoch 196: Train D loss: -6.3716, G loss: 45.2187
Epoch 197: Train D loss: -5.2655, G loss: 47.5324
Epoch 198: Train D loss: -1.8786, G loss: 40.9588
Epoch 199: Train D loss: -2.6374, G loss: 43.7142

Epoch 200: Train D loss: -3.9507, G loss: 47.2080
Epoch 201: Train D loss: -5.3370, G loss: 46.3652
Epoch 202: Train D loss: -5.8424, G loss: 47.8687
Epoch 203: Train D loss: -6.4526, G loss: 48.3924
Epoch 204: Train D loss: -6.5707, G loss: 44.9869
Epoch 205: Train D loss: -5.3993, G loss: 48.6121
Epoch 206: Train D loss: -6.3137, G loss: 45.9902
Epoch 207: Train D loss: -7.0424, G loss: 47.2055
Epoch 208: Train D loss: -6.4492, G loss: 47.7563
Epoch 209: Train D loss: -5.4807, G loss: 48.4502

Epoch 210: Train D loss: -5.0339, G loss: 44.3198
Epoch 211: Train D loss: -6.2150, G loss: 46.6903
Epoch 212: Train D loss: -6.3287, G loss: 46.9994
Epoch 213: Train D loss: -6.7592, G loss: 47.2867
Epoch 214: Train D loss: -7.4126, G loss: 46.2384
Epoch 215: Train D loss: -5.6615, G loss: 49.0545
Epoch 216: Train D loss: -6.7217, G loss: 45.1940
Epoch 217: Train D loss: -5.8682, G loss: 47.2322
Epoch 218: Train D loss: -6.3973, G loss: 46.7623
Epoch 219: Train D loss: -6.6139, G loss: 47.7322

Epoch 220: Train D loss: -7.0146, G loss: 45.4965
Epoch 221: Train D loss: -6.6389, G loss: 45.4445
Epoch 222: Train D loss: -2.4586, G loss: 43.4702
Epoch 223: Train D loss: -1.1414, G loss: 44.9890
Epoch 224: Train D loss: -2.4444, G loss: 43.5652
Epoch 225: Train D loss: -2.7467, G loss: 43.7222
Epoch 226: Train D loss: -3.2374, G loss: 46.9822
Epoch 227: Train D loss: -3.9496, G loss: 46.7482
Epoch 228: Train D loss: -4.6132, G loss: 47.3437
Epoch 229: Train D loss: -4.8289, G loss: 47.6797

Epoch 230: Train D loss: -5.4711, G loss: 49.1251
Epoch 231: Train D loss: -6.1587, G loss: 47.9491
Epoch 232: Train D loss: -5.5119, G loss: 49.3913
Epoch 233: Train D loss: -6.3556, G loss: 47.5487
Epoch 234: Train D loss: -6.8856, G loss: 48.7144
Epoch 235: Train D loss: -5.9857, G loss: 49.2538
Epoch 236: Train D loss: -6.8362, G loss: 47.1008
Epoch 237: Train D loss: -5.6286, G loss: 46.5493
Epoch 238: Train D loss: -6.1916, G loss: 48.5800
Epoch 239: Train D loss: -6.5881, G loss: 49.2366

Epoch 240: Train D loss: -6.6438, G loss: 48.5610
Epoch 241: Train D loss: -6.7195, G loss: 49.8252
Epoch 242: Train D loss: -4.6745, G loss: 47.2077
Epoch 243: Train D loss: -3.4491, G loss: 48.0188
Epoch 244: Train D loss: -5.6447, G loss: 47.0904
Epoch 245: Train D loss: -6.9580, G loss: 49.3273
Epoch 246: Train D loss: -6.0463, G loss: 49.4691
Epoch 247: Train D loss: -6.7861, G loss: 47.1646
Epoch 248: Train D loss: -6.3455, G loss: 49.8253
Epoch 249: Train D loss: -7.3601, G loss: 47.9653

Epoch 250: Train D loss: -6.5575, G loss: 48.3013
Epoch 251: Train D loss: -7.5467, G loss: 49.5057
Epoch 252: Train D loss: -6.5534, G loss: 46.6462
Epoch 253: Train D loss: -7.2461, G loss: 48.7580
Epoch 254: Train D loss: -7.1704, G loss: 50.4618
Epoch 255: Train D loss: -0.9368, G loss: 44.2950
Epoch 256: Train D loss: 0.5536, G loss: 42.4258
Epoch 257: Train D loss: -0.1443, G loss: 39.6144
Epoch 258: Train D loss: -0.3723, G loss: 39.5237
Epoch 259: Train D loss: -0.5579, G loss: 37.6958

Epoch 260: Train D loss: -0.8124, G loss: 42.8023
Epoch 261: Train D loss: -0.7153, G loss: 39.1099
Epoch 262: Train D loss: -0.7839, G loss: 40.3950
Epoch 263: Train D loss: -1.2679, G loss: 37.3640
Epoch 264: Train D loss: -1.0900, G loss: 40.6134
Epoch 265: Train D loss: -1.5458, G loss: 39.8020
Epoch 266: Train D loss: -1.7776, G loss: 41.4939
Epoch 267: Train D loss: -1.8051, G loss: 40.2531
Epoch 268: Train D loss: -2.2179, G loss: 39.6956
Epoch 269: Train D loss: -2.2293, G loss: 41.5303

Epoch 270: Train D loss: -2.6403, G loss: 41.1036
Epoch 271: Train D loss: -2.7642, G loss: 43.1376
Epoch 272: Train D loss: -3.4206, G loss: 41.1782
Epoch 273: Train D loss: -3.3905, G loss: 44.7823
Epoch 274: Train D loss: -4.0353, G loss: 43.9382
Epoch 275: Train D loss: -4.0545, G loss: 44.9949
Epoch 276: Train D loss: -4.5836, G loss: 45.0697
Epoch 277: Train D loss: -4.4768, G loss: 46.5310
Epoch 278: Train D loss: -4.8714, G loss: 45.7383
Epoch 279: Train D loss: -5.0645, G loss: 46.2272

Epoch 280: Train D loss: -5.3110, G loss: 45.2193
Epoch 281: Train D loss: -5.3459, G loss: 46.8995
Epoch 282: Train D loss: -5.4012, G loss: 45.6606
Epoch 283: Train D loss: -5.6629, G loss: 47.7304
Epoch 284: Train D loss: -6.0067, G loss: 47.8233
Epoch 285: Train D loss: -5.9803, G loss: 45.2547
Epoch 286: Train D loss: -5.6341, G loss: 48.4564
Epoch 287: Train D loss: -6.2482, G loss: 47.1421
Epoch 288: Train D loss: -5.5349, G loss: 46.8103
Epoch 289: Train D loss: -6.0081, G loss: 47.4786

Epoch 290: Train D loss: -6.1895, G loss: 49.2255
Epoch 291: Train D loss: -5.8228, G loss: 46.5874
Epoch 292: Train D loss: -6.7193, G loss: 50.4547
Epoch 293: Train D loss: -6.9497, G loss: 49.2031
Epoch 294: Train D loss: -6.4045, G loss: 49.5813
Epoch 295: Train D loss: -6.5181, G loss: 49.3917
Epoch 296: Train D loss: -5.3349, G loss: 49.1568
Epoch 297: Train D loss: -6.2215, G loss: 48.8781
Epoch 298: Train D loss: -6.0418, G loss: 50.5765
Epoch 299: Train D loss: -5.4949, G loss: 49.0278

As before, observe the loss curves and the distribution of the parameters in D.

loss_plot(d_loss_hist, g_loss_hist)

show_d_params(D)

Assignment:

Compare the images produced by the WGAN and WGAN-GP generators: the quality they reach at the same epoch (or how many epochs each needs to reach comparable quality), their loss curves, and the parameter distributions of D. What differences do you observe?

Answer:

  1. At the same epoch, WGAN-GP has converged further, i.e. it produces good images with fewer epochs;
  2. WGAN's loss curve converges more stably but more slowly, while WGAN-GP converges faster but keeps fluctuating after convergence;
  3. WGAN uses weight clipping to enforce the Lipschitz constraint, so D's parameters are truncated to a fixed interval after every update, which concentrates them at the clipping boundaries;
  4. WGAN-GP instead uses a gradient penalty, which softly constrains the L2 norm of D's gradient with respect to its input to stay near 1; this yields a smoother parameter distribution and alleviates the vanishing/exploding gradient problems caused by clipping.
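The two constraints discussed above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the exact code used in this experiment: `D`, `real`, and `fake` here are hypothetical stand-ins, and the clipping threshold 0.01 follows the value commonly used for WGAN.

```python
import torch
import torch.nn as nn

# --- WGAN: weight clipping ---
# After each D update, every parameter is truncated into [-c, c],
# which is why D's parameters pile up at the clipping boundaries.
def clip_weights(D, c=0.01):
    for p in D.parameters():
        p.data.clamp_(-c, c)

# --- WGAN-GP: gradient penalty ---
# Penalize (||∇_x_hat D(x_hat)||_2 - 1)^2 at random interpolations
# x_hat between real and generated samples.
def gradient_penalty(D, real, fake):
    batch_size = real.size(0)
    # one interpolation coefficient per sample
    alpha = torch.rand(batch_size, 1).expand_as(real)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(
        outputs=d_out, inputs=x_hat,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True, retain_graph=True)[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# toy usage with a hypothetical linear discriminator
D = nn.Sequential(nn.Linear(8, 16), nn.LeakyReLU(0.2), nn.Linear(16, 1))
real, fake = torch.randn(4, 8), torch.randn(4, 8)
gp = gradient_penalty(D, real, fake)   # added to D's loss, scaled by lambda
clip_weights(D)                        # what WGAN would do instead
```

In WGAN-GP the penalty term `gp` is added to the discriminator loss (typically scaled by a coefficient lambda = 10), replacing the clipping step entirely.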