
Overfitting is not always a problem

On how to actively overfit your models and reap the benefits.

Santiago Valdarrama
2 min read · Mar 26, 2021
Somebody once compared overfitting to a shark waiting in the shadows… 😦

Of course, there is more than one way to build a good model. I’ve seen, however, how easy it is to make time disappear when we spend too much of it going down the wrong rabbit holes.

Over the years, I’ve built my own rudimentary set of steps that I always follow when starting a new project. Some of these were recommendations from people who came before me; others I found after banging my head more than once.

Today, let’s focus on one of them: I want you to stop being scared of overfitting, embrace it, and — what’s even better — actively start looking for it.

I was taught that overfitting was a bad thing.

If you are still figuring out the vocabulary: a model that’s overfitting spits out predictions it memorized from the training data. This means it didn’t learn patterns that generalize.

However, there’s something good about this: if our model can memorize the training data, we know it has enough capacity, and that nothing weird is broken in the learning process.

And that’s exactly where we want to be!

So before we go crazy and throw the kitchen sink at our problem, we will exploit overfitting…
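To make this concrete, here’s a minimal sketch of the idea, assuming a PyTorch setup. The toy model, batch size, and learning rate are all illustrative placeholders; the point is the pattern: grab one small batch, train on it repeatedly, and check that the loss collapses toward zero.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: swap in your real model and a single
# small batch from your real dataset.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(32, 10)          # one small batch of inputs
y = torch.randint(0, 2, (32,))   # matching labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on the same batch over and over. If the model has enough
# capacity and the training loop is wired correctly, it should
# memorize the batch and the loss should head toward zero.
for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```

If the loss gets stuck well above zero, either the model doesn’t have enough capacity or something in the pipeline (data, loss, optimizer) is broken. If it reaches near zero, we know the plumbing works, and only then is it worth worrying about making the model generalize.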

Written by Santiago Valdarrama

I build machine learning systems until 5 pm. Then I come here and tell you stories about them.
