A long time ago, I fell in love with autoencoders.

Santiago Valdarrama
3 min read · Mar 13, 2021


A banana. A ripe banana 🍌, to be more precise, courtesy of unsplash.com.

Autoencoders made me fall in love with machine learning early on.

Hey, if they got me, maybe they'll be your gateway into the field too!

Autoencoders are data compression algorithms built using neural networks. A network encodes (compresses) the original input into an intermediate representation, and another network reverses the process to get the same input back.
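To make that structure concrete, here is a minimal sketch in plain NumPy: a single linear encoder layer compresses 8-dimensional inputs down to a 2-number bottleneck, and a linear decoder expands them back. Everything here (the dataset, the dimensions, the learning rate) is an illustrative assumption; a real autoencoder would use a deep-learning framework and nonlinear layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (made up for illustration): 200 samples of 8-dimensional
# data that secretly lives on a 2-dimensional subspace, so a 2-unit
# bottleneck can represent it well.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
data = latent @ mixing

input_dim, bottleneck_dim = 8, 2

# Encoder and decoder are each a single linear layer here.
W_enc = rng.normal(scale=0.1, size=(input_dim, bottleneck_dim))
W_dec = rng.normal(scale=0.1, size=(bottleneck_dim, input_dim))

def mse():
    recon = (data @ W_enc) @ W_dec
    return float(np.mean((recon - data) ** 2))

mse_before = mse()

# Train both layers with gradient descent on the reconstruction error.
lr = 0.02
for _ in range(2000):
    code = data @ W_enc    # encode: compress each input to 2 numbers
    recon = code @ W_dec   # decode: expand the code back to 8 numbers
    err = recon - data     # how far off is the reconstruction?
    grad_dec = code.T @ err / len(data)
    grad_enc = data.T @ (err @ W_dec.T) / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse_after = mse()
```

After training, the reconstruction error drops sharply, even though every input had to squeeze through two numbers on the way through.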

The encoding process generalizes the input data. I like to think of the encoder as the Summarizer in Chief of the network: its entire job is to represent the dataset as compactly as possible, so the decoder can do a decent job of reproducing the original data.

This encoding and decoding process is lossy, which means we will lose some details from the original input.

My attempt at drawing the structure of a basic autoencoder.

But how are these autoencoders useful?

Any uncommon characteristics in the input data will never make it past the autoencoder’s bottleneck. The encoder is very picky about the features it encodes, so it’ll never select anything that’s not representative of the whole dataset. This is a great…
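That pickiness is exactly what makes autoencoders handy for flagging unusual inputs: anything the encoder couldn't represent comes back with a large reconstruction error. Here is a sketch of that idea, using a closed-form linear "autoencoder" (the top principal components, via SVD) as a stand-in for a trained network; the data and the anomaly are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data lives on a 2-D subspace of 8-D space (illustrative).
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
data = latent @ mixing

# Closed-form linear "autoencoder": the optimal linear encoder/decoder
# pair is given by the top principal components, a stand-in for a
# trained network's 2-unit bottleneck.
_, _, vt = np.linalg.svd(data, full_matrices=False)
components = vt[:2]

def reconstruct(x):
    # Encode (project onto 2 directions), then decode (expand back).
    return (x @ components.T) @ components

def score(x):
    # Reconstruction error: high when x doesn't look like the dataset.
    return float(np.mean((reconstruct(x) - x) ** 2))

normal_sample = data[0]
# An "uncommon" input: a normal sample plus noise the bottleneck
# has never had to represent.
anomaly = normal_sample + rng.normal(scale=3.0, size=8)

normal_score = score(normal_sample)
anomaly_score = score(anomaly)
```

The in-distribution sample reconstructs almost perfectly, while the anomaly's unusual components never make it past the bottleneck, so its score is much higher.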


Written by Santiago Valdarrama

I build machine learning systems until 5 pm. Then I come here and tell you stories about them.
