Variational Auto-Encoder Example
Build a variational auto-encoder (VAE) to generate digit images from a noise distribution with TensorFlow.
- Author: Aymeric Damien
- Project: https://github.com/aymericdamien/TensorFlow-Examples/

VAE Overview
References:
- D.P. Kingma, M. Welling. Auto-Encoding Variational Bayes. The International Conference on Learning Representations (ICLR), Banff, 2014.
- X. Glorot, Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. AISTATS 9, 249-256.
Other tutorials:
- Variational Auto Encoder Explained. Kevin Frans.
MNIST Dataset Overview
This example uses MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened into a 1-D NumPy array of 784 features (28*28).
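The flattening step can be sketched in NumPy (the random 28x28 array below is a stand-in for one MNIST image):

```python
import numpy as np

# A stand-in for one 28x28 MNIST image, values already scaled to [0, 1].
image = np.random.rand(28, 28)

# Flatten to the 784-feature vector the network consumes.
flat = image.reshape(-1)
print(flat.shape)  # (784,)
```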
from __future__ import division, print_function, absolute_import
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import tensorflow as tf
Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
Parameters
learning_rate = 0.001
num_steps = 30000
batch_size = 64
# Network Parameters
image_dim = 784 # MNIST images are 28x28 pixels
hidden_dim = 512
latent_dim = 2
# A custom initialization (see Xavier Glorot init)
def glorot_init(shape):
    return tf.random_normal(shape=shape, stddev=1. / tf.sqrt(shape[0] / 2.))
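The initializer draws weights with standard deviation 1/sqrt(fan_in / 2). A NumPy sketch of the same scaling (glorot_init_np is a hypothetical helper mirroring the TensorFlow version, not part of the original code):

```python
import numpy as np

def glorot_init_np(shape):
    # Same scaling as glorot_init above: stddev = 1 / sqrt(fan_in / 2)
    return np.random.normal(scale=1. / np.sqrt(shape[0] / 2.), size=shape)

w = glorot_init_np([784, 512])
print(w.shape)  # (784, 512)
# w.std() should be close to 1 / sqrt(784 / 2) ~= 0.0505
```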
Variables
weights = {
    'encoder_h1': tf.Variable(glorot_init([image_dim, hidden_dim])),
    'z_mean': tf.Variable(glorot_init([hidden_dim, latent_dim])),
    'z_std': tf.Variable(glorot_init([hidden_dim, latent_dim])),
    'decoder_h1': tf.Variable(glorot_init([latent_dim, hidden_dim])),
    'decoder_out': tf.Variable(glorot_init([hidden_dim, image_dim]))
}
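The listing above stops before the model itself. The two VAE-specific pieces these encoder outputs feed into are the reparameterization trick and the closed-form KL term; a NumPy sketch of both (variable names here are illustrative, not from the source):

```python
import numpy as np

batch_size, latent_dim = 64, 2

# Hypothetical encoder outputs for one batch: mean and log-variance of q(z|x).
z_mean = np.zeros((batch_size, latent_dim))
z_log_var = np.zeros((batch_size, latent_dim))

# Reparameterization trick: sample z as a deterministic function of
# (z_mean, z_log_var) plus standard-normal noise, so gradients can flow
# through the sampling step.
eps = np.random.normal(size=(batch_size, latent_dim))
z = z_mean + np.exp(z_log_var / 2.) * eps

# KL(q(z|x) || N(0, I)) per example, in closed form; it is zero when
# q(z|x) is exactly the standard normal, as with the zeros above.
kl = -0.5 * np.sum(1 + z_log_var - z_mean**2 - np.exp(z_log_var), axis=1)
print(z.shape, kl.shape)  # (64, 2) (64,)
```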

