Variational AutoEncoder
Autoencoder
- A neural network trained with an unsupervised learning algorithm that applies back-propagation, setting the target values equal to the inputs (see the sketch after this list)
- Autoencoders are preferred over PCA because they can learn non-linear transformations through non-linear activation functions. It is also more efficient to learn several layers with an autoencoder than one huge transformation with PCA.
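A minimal sketch of the "target value = input" idea, assuming PyTorch and illustrative dimensions (784-dimensional flattened images, a 64-unit code); it is not a full training script, just one back-propagation step on the reconstruction error.

```python
import torch
import torch.nn as nn

# Toy autoencoder: non-linear compression to 64 units, then reconstruction
autoencoder = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),    # encoder: non-linear transformation
    nn.Linear(64, 784), nn.Sigmoid()  # decoder: back to the input dimension
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(32, 784)               # dummy batch of flattened images
x_hat = autoencoder(x)                # forward pass
loss = loss_fn(x_hat, x)              # target value is the input itself
optimizer.zero_grad()
loss.backward()                       # back-propagate the reconstruction error
optimizer.step()
```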
Autoencoder Applications
- Image coloring (black-and-white images -> colored)
- Feature variation (extract the required features)
- Dimensionality reduction
- Denoising images (remove noise)
- Watermark removal
Autoencoder Architecture
- Encoder: the part of the network that compresses the input into a latent-space representation
- Code: the part of the network that represents the compressed input (the bottleneck)
- Decoder: decodes the compressed representation back to the original dimension (see the sketch after this list)
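A hypothetical sketch of these three parts as separate modules, assuming PyTorch; the layer sizes and the 32-unit code are illustrative choices, not values from the notes.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: compresses the input into the latent-space representation
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim)
        )
        # Decoder: maps the code back to the original input dimension
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid()
        )

    def forward(self, x):
        code = self.encoder(x)     # code: the compressed representation
        return self.decoder(code)  # reconstruction in the input space
```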
Properties of Autoencoder
- Data-specific: Autoencoders are only able to meaningfully compress data similar to what they have been trained on.
- Lossy: the decompressed output will be degraded compared to the original input
- Unsupervised: autoencoders are considered an unsupervised learning technique since they don't need explicit labels to train on. More precisely, they are self-supervised, because they generate their own labels from the training data.
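A small sketch of that self-supervised labelling, assuming PyTorch: the dataset yields (input, target) pairs where the target is generated from the data itself. For a denoising autoencoder the same idea applies, with a corrupted copy as the input and the clean image as the target.

```python
import torch
from torch.utils.data import Dataset

class SelfSupervisedPairs(Dataset):
    """Yields (input, target) pairs generated from the data itself."""
    def __init__(self, images):
        self.images = images          # tensor of shape (N, 784)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        x = self.images[idx]
        # The "label" is the example itself; for denoising, one would
        # instead return (x + noise, x).
        return x, x
```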
Types of Autoencoder
- Denoising autoencoder.
- Sparse Autoencoder.
- Deep Autoencoder.
- Contractive Autoencoder.
- Undercomplete Autoencoder.
- Convolutional Autoencoder.
- Variational Autoencoder.
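The last type listed gives the section its title. As a hedged sketch of the standard formulation (the notes do not spell out the details), assuming PyTorch and illustrative sizes: the encoder predicts a mean and log-variance for the code, a latent vector is sampled with the reparameterisation trick, and training combines reconstruction error with a KL-divergence term toward a standard normal prior.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, 400)
        self.mu = nn.Linear(400, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(400, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 400), nn.ReLU(),
            nn.Linear(400, input_dim), nn.Sigmoid()
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)      # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```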