A long time ago, I fell in love with autoencoders. They're what pulled me into machine learning early on.
Hey, if they got me, maybe they'll be the gateway that gets you more involved in the field too!
Autoencoders are data compression algorithms built with neural networks. One network encodes (compresses) the original input into a compact intermediate representation, and a second network reverses the process, reconstructing the original input as closely as possible.
The encoding process generalizes the input data. I like to think of the encoder as the Summarizer in Chief of the network: its entire job is to represent the dataset as compactly as possible, so the decoder can do a decent job of reproducing the original data.
This encoding and decoding process is lossy, which means we will lose some details from the original input.
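To make the encode-compress-decode loop concrete, here's a minimal sketch of a linear autoencoder in plain NumPy (an illustration I wrote for this idea, not code from the original post): it squeezes 8-dimensional inputs through a 3-unit bottleneck and trains both weight matrices with gradient descent on the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 samples that actually live on a 3-D subspace of R^8,
# so a 3-unit bottleneck can represent them well.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 8))
X = latent @ mixing

n_in, n_bottleneck = 8, 3
W_enc = rng.normal(scale=0.1, size=(n_in, n_bottleneck))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_bottleneck, n_in))  # decoder weights

lr = 0.02
for _ in range(3000):
    Z = X @ W_enc        # encode: compress into the bottleneck
    X_hat = Z @ W_dec    # decode: reconstruct the input
    err = X_hat - X      # reconstruction error (what gets lost)
    # Gradients of the mean squared reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean((X - X @ W_enc @ W_dec) ** 2)
print(f"reconstruction MSE: {mse:.4f}")
```

Because the bottleneck has fewer units than the input, the network can't memorize every detail; it's forced to keep only the directions that explain most of the data, which is exactly the lossy compression described above. (A real autoencoder would add nonlinear activations and deeper encoder/decoder stacks, but the shape of the computation is the same.)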
But how are these autoencoders useful?
Any uncommon characteristics in the input data will never make it past the autoencoder's bottleneck. The encoder is very picky about the features it keeps, so it won't select anything that isn't representative of the dataset as a whole. This is a great…