tensorflow_stacked_denoising_autoencoder

Implementation of the stacked denoising autoencoder in Tensorflow.

0. Setup Environment

To run the scripts, at least the following packages are required:

  • Python 3.5.2
  • Tensorflow 1.6.0
  • NumPy 1.14.1

You can use Anaconda to install these required packages. For Tensorflow, use the following command for a quick installation under Windows:

pip install tensorflow

1. Content

In this project, there are implementations of several kinds of autoencoders. The base Python class is library/Autoencoder.py; set the value of "ae_para" in the constructor of Autoencoder to choose the corresponding autoencoder variant.

  • ae_para[0]: the corruption level for the input of the autoencoder. If ae_para[0] > 0, it's a denoising autoencoder;
  • ae_para[1]: the coefficient for sparse regularization. If ae_para[1] > 0, it's a sparse autoencoder.

1.1 autoencoder

Follow the code sample below to construct an autoencoder:

```
corruption_level = 0
sparse_reg = 0

n_inputs = 784
n_hidden = 400
n_outputs = 10
lr = 0.001

# define the autoencoder
ae = Autoencoder(n_layers=[n_inputs, n_hidden],
                 transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                 ae_para=[corruption_level, sparse_reg])
```

To visualize the extracted features and reconstructed images, check the code in visualize_ae.py.

  • Extracted features on MNIST: (image)
  • Reconstructed noisy images after the input -> encoder -> decoder pipeline: (image)
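To make the role of the Autoencoder class concrete, here is a NumPy-only illustration of the forward pass it learns: an encoder producing a 400-dimensional hidden code and a decoder reconstructing the 784-dimensional input. This is a sketch with untrained weights, not the repository's TensorFlow code; the layer sizes and the ReLU transfer function mirror the sample above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 784, 400

# randomly initialized weights, as they would be before any training
W_enc = rng.normal(0, 0.01, size=(n_inputs, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0, 0.01, size=(n_hidden, n_inputs))
b_dec = np.zeros(n_inputs)

def encode(x):
    # hidden code: relu(x W + b), matching transfer_function = tf.nn.relu
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(h):
    # linear reconstruction back to the 784-dim input space
    return h @ W_dec + b_dec

x = rng.random(n_inputs)           # a fake flattened "image"
x_rec = decode(encode(x))
mse = np.mean((x - x_rec) ** 2)    # squared-error cost the optimizer would minimize
print(x_rec.shape, mse)
```

Training adjusts the four weight tensors so that `mse` becomes small on the data distribution; the hidden code `encode(x)` is then the extracted feature shown in the MNIST visualizations.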

1.2 denoising autoencoder

Follow the code sample below to construct a denoising autoencoder:

```
corruption_level = 0.3
sparse_reg = 0

n_inputs = 784
n_hidden = 400
n_outputs = 10
lr = 0.001

# define the autoencoder
ae = Autoencoder(n_layers=[n_inputs, n_hidden],
                 transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                 ae_para=[corruption_level, sparse_reg])
```

Test results:

  • Extracted features on MNIST: (image)
  • Reconstructed noisy images after the input -> encoder -> decoder pipeline: (image)
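A corruption_level of 0.3 means roughly that 30% of the input entries are destroyed before encoding, and the network is trained to reconstruct the clean input from the corrupted version. A common way to implement such masking noise is sketched below; this illustrates the idea, not necessarily the exact corruption scheme used inside Autoencoder.py.

```python
import numpy as np

def corrupt(x, corruption_level, rng):
    """Masking noise: zero out a random fraction of the input entries.

    The denoising autoencoder is trained to reconstruct the clean x
    from this corrupted version.
    """
    mask = rng.random(x.shape) >= corruption_level  # keep ~70% when level = 0.3
    return x * mask

rng = np.random.default_rng(42)
x = np.ones(10000)                 # a large dummy input to check the ratio
x_tilde = corrupt(x, 0.3, rng)
kept = x_tilde.mean()              # fraction of surviving entries, close to 0.7
print(kept)
```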

1.3 sparse autoencoder

Follow the code sample below to construct a sparse autoencoder:

```
corruption_level = 0
sparse_reg = 2

n_inputs = 784
n_hidden = 400
n_outputs = 10
lr = 0.001

# define the autoencoder
ae = Autoencoder(n_layers=[n_inputs, n_hidden],
                 transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                 ae_para=[corruption_level, sparse_reg])
```
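Here sparse_reg weights a sparsity penalty added to the reconstruction cost. In the UFLDL formulation referenced at the end of this README, that penalty is the KL divergence between a small target activation rho and the average activation rho_hat of each hidden unit. The sketch below shows that penalty; the repository's Autoencoder.py may use a different regularizer, so treat this as the textbook version.

```python
import numpy as np

def kl_sparsity_penalty(hidden_acts, rho=0.05):
    """Sum over hidden units of KL(rho || rho_hat).

    hidden_acts: (batch, n_hidden) activations assumed to lie in (0, 1).
    rho: target average activation per hidden unit.
    """
    rho_hat = hidden_acts.mean(axis=0)
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)  # avoid log(0)
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()

rng = np.random.default_rng(0)
dense_acts = rng.random((32, 400))                 # dense activations -> large penalty
penalty_dense = kl_sparsity_penalty(dense_acts)
penalty_at_target = kl_sparsity_penalty(np.full((32, 400), 0.05))  # exactly rho -> zero
print(penalty_dense, penalty_at_target)
```

Minimizing the total cost therefore pushes each hidden unit's average activation toward rho, which is what makes the learned features sparse.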

1.4 stacked (denoising) autoencoder

For a stacked autoencoder, there is more than one autoencoder in the network. In the script SAE_Softmax_MNIST.py, I defined two autoencoders:

```
corruption_level = 0.3
sparse_reg = 0

n_inputs = 784
n_hidden = 400
n_hidden2 = 100
n_outputs = 10
lr = 0.001

# define the first autoencoder
ae = Autoencoder(n_layers=[n_inputs, n_hidden],
                 transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                 ae_para=[corruption_level, sparse_reg])

# define the second autoencoder
ae_2nd = Autoencoder(n_layers=[n_hidden, n_hidden2],
                     transfer_function=tf.nn.relu,
                     optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                     ae_para=[corruption_level, sparse_reg])
```

For the training of the SAE on the MNIST classification task, there are four sequential parts:

1. Training of the first autoencoder;
2. Training of the second autoencoder, based on the output of the first;
3. Training of the output layer (normally a softmax layer), based on the sequential output of the first and second autoencoders;
4. Fine-tuning of the whole network.
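The data flow behind those four phases can be sketched in shapes alone: each autoencoder's encoder output becomes the next stage's input, and the softmax classifier sits on top of the second hidden code. This NumPy sketch uses untrained weights and shows only the wiring, not the repository's actual training code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_hidden2, n_outputs = 784, 400, 100, 10

def layer(n_in, n_out):
    return rng.normal(0, 0.01, (n_in, n_out)), np.zeros(n_out)

W1, b1 = layer(n_inputs, n_hidden)     # first autoencoder's encoder
W2, b2 = layer(n_hidden, n_hidden2)    # second autoencoder's encoder
W3, b3 = layer(n_hidden2, n_outputs)   # softmax output layer

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = rng.random((32, n_inputs))         # a batch of fake MNIST images

h1 = relu(x @ W1 + b1)                 # phase 1 trains (W1, b1) to reconstruct x
h2 = relu(h1 @ W2 + b2)                # phase 2 trains (W2, b2) to reconstruct h1
probs = softmax(h2 @ W3 + b3)          # phase 3 trains the softmax layer on h2
# phase 4 (fine-tuning) backpropagates through all three layers at once
print(probs.shape)
```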

Detailed code can be found in the script SAE_Softmax_MNIST.py.

2. Reference

The "Autoencoder" class is based on the Tensorflow official models: https://github.com/tensorflow/models/tree/master/research/autoencoder/autoencoder_models

For the theory of autoencoders and sparse autoencoders, please refer to: http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/

3. My blog for this project

A discussion of autoencoders: denoising autoencoders / sparse autoencoders / stacked autoencoders (with Tensorflow implementations) (original title in Chinese: 漫谈autoencoder:降噪自编码器/稀疏自编码器/栈式自编码器(含tensorflow实现))
