Implementing a Variational Autoencoder and exploring the importance of each part of its loss function.
This notebook is another look into deep learning, this time into Variational Autoencoders (VAE for short). A VAE is an autoencoder whose distribution of encodings is regularised during training to ensure that its latent space has good properties, allowing us to generate new images. In our case, the VAE is trained on the MNIST data set and generates new digit images using the latent variables learned during the training phase.
The VariationalAutoencoder is defined with an encoder and a decoder and uses the reparameterization trick (see the sketch after the list below). The analysis is composed of three main parts:
- Evaluating the effect of the expectation (mean) μ on the output by sampling new images from the probabilistic decoder with a noisy expectation.
- Evaluating the effect of the standard deviation σ by performing a similar trial, this time multiplying the std by 0.1, 1, 10, and 100.
- Looking at the role of each part of the VAE loss function, which is the sum of a reconstruction term and a KL-divergence term: loss = reconstruction loss + KL(q(z|x) ‖ p(z)). This is executed by training with only one part at a time.
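As a reference for the pieces mentioned above, here is a minimal sketch of a VAE of this kind, assuming PyTorch and flattened 28x28 MNIST images; the class name, layer sizes, and `vae_loss` helper are illustrative assumptions, not necessarily the exact code in the notebook.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: maps an image to the mean and log-variance of q(z|x).
        self.fc_enc = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: maps a latent code z back to pixel probabilities.
        self.fc_dec = nn.Linear(latent_dim, hidden_dim)
        self.fc_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc_enc(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients can flow through the sampling step.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        h = F.relu(self.fc_dec(z))
        return torch.sigmoid(self.fc_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(x_recon, x, mu, logvar):
    # Reconstruction term: how well the decoder reproduces the input.
    recon = F.binary_cross_entropy(x_recon, x, reduction='sum')
    # KL term: regularises q(z|x) toward the standard normal prior p(z).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```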
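The first two parts of the analysis boil down to decoding latent codes drawn around the encoder's mean while scaling the standard deviation. The sketch below illustrates that idea under the same assumptions as above; `model` and `sample_with_scaled_std` are hypothetical names, not taken from the notebook.

```python
import torch

@torch.no_grad()
def sample_with_scaled_std(model, x, scales=(0.1, 1, 10, 100)):
    # x: a batch of flattened MNIST images; model: the VAE sketched above.
    mu, logvar = model.encode(x)
    std = torch.exp(0.5 * logvar)
    samples = {}
    for s in scales:
        eps = torch.randn_like(std)
        z = mu + s * std * eps          # noisy expectation, scaled spread
        samples[s] = model.decode(z)    # new images from the decoder
    return samples
```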
I will use Google Colab as an example, but a similar process can be performed in other notebook editors. After downloading the file, in Colab press File -> Open notebook -> Upload and then drop in the downloaded file.
Now you can run the whole notebook or specific cells.