I didn’t pay too much attention when I heard about CLIP, a new neural network that learns visual concepts from natural language supervision. At some point, however, my social media feed was all about it. I mostly have to blame this guy. I had to look into it 👀.
It immediately felt like Christmas in March (CLIP was released in January, but I was late to the party). Here you had this new technique that kicked everyone’s butts with a zero-shot approach!
Let’s try to unpack this a little bit.
If your model can predict classes that you didn’t…
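Zero-shot classification sounds like magic, but the mechanics are simple to sketch. CLIP embeds images and text prompts into a shared space and picks the class whose prompt sits closest to the image. The vectors below are made up for illustration; a real CLIP model would produce them from its image and text encoders.

```python
import numpy as np

def cosine_similarity(vector, matrix):
    # Cosine similarity between one vector and each row of a matrix.
    vector = vector / np.linalg.norm(vector)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ vector

# Pretend these came out of CLIP's encoders. They are made up.
image_embedding = np.array([0.9, 0.1, 0.3])
class_prompts = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
text_embeddings = np.array([
    [0.8, 0.2, 0.4],  # "a photo of a cat"
    [0.1, 0.9, 0.2],  # "a photo of a dog"
    [0.2, 0.1, 0.9],  # "a photo of a car"
])

scores = cosine_similarity(image_embedding, text_embeddings)
prediction = class_prompts[int(np.argmax(scores))]
print(prediction)  # "a photo of a cat" for these made-up vectors
```

Notice there's no training step for the classes themselves: swap the prompts and you have a brand-new classifier, which is the whole appeal of zero-shot.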
Two weeks later, it still felt like we were running in circles. Was there anything else to try? “Seriously, we’ve been at this for a long time already!” somebody shouted.
There’s so much we can’t do. Saying that twenty percent is embarrassing may be the understatement of the century. “How do we explain this?”
We needed a breakthrough, and we needed it quick, right when our well of ideas was running out faster than our funding.
You’ve read the articles and watched the movies. Artificial intelligence single-handedly turning industries on their head, changing lives. …
Do you know what scares me? Having to go through a mountain of data to come up with labels.
Data labeling is hard, expensive, and sometimes outright prohibitive. Data labeling can kill your machine learning project even before it starts.
Let’s kick this off with a hypothetical problem: we’d like to build a model capable of visually inspecting photos of circuit boards and classifying them based on their specific configuration.
Imagine a factory producing thousands of these boards per minute. Going through each circuit board manually would be a nightmare and slow down production significantly. …
I never cared much about machine learning.
If we were playing the blame game, I’d certainly point to the “math is not my thing” excuse. I had seen it with my own eyes, and it seemed daunting.
Back then, we had to write training loops from scratch, beg large universities for cluster time, and deal with parallel libraries and remote debugging.
That was a long time ago.
Fast forward a few years, and I came around and gave it a try. To my surprise, I was more than ready to get into it!
The field had changed. The math I…
I’m the first one excited about the potential for Artificial Intelligence and specifically Machine Learning to change the world we currently live in.
Look around, and you’ll see how things are changing at a breakneck pace! Every single day, we are using machine learning to power more and more of our lives.
But as much as I love all of this progress, it doesn’t come for free. Implementing machine learning comes with immense challenges that have the potential to reshape our society in unintended ways.
Understanding the source of these problems is the first step towards finding systematic solutions that…
I get it.
Creating a good machine learning model is really sexy. That’s what’s different and where everyone focuses all of their attention.
But machine learning is much more than that.
Yes, machine learning engineers spend a lot of time designing and training new models, but this is just a small fraction of their job.
In reality, dealing with data and operationalizing models is much more time-consuming and sometimes even harder and more involved than creating the models in the first place.
The ultimate goal of any project is to provide value, and a model is just a piece of…
Beyond the title’s cuteness, I’ve been exploring this idea for quite some time now: the number of examples we feed a neural network on each training step is an essential tool we can use to influence the training process.
In machine learning jargon, we call this the “batch size.” A batch is nothing else than a group of examples packed together in an array-like structure.
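That definition is easy to show in code. Here’s a minimal sketch (the numbers are arbitrary) that packs individual examples into batches:

```python
import numpy as np

# Six individual examples, each a vector of four features.
examples = [np.random.rand(4) for _ in range(6)]

# A batch is just a group of examples stacked into one array.
batch_size = 2
batches = [
    np.stack(examples[i:i + batch_size])
    for i in range(0, len(examples), batch_size)
]

print(len(batches))      # 3 batches
print(batches[0].shape)  # (2, 4): two examples, four features each
```

During training, the network processes one of these arrays at a time and updates its weights after each one, which is why the batch size shapes the whole process.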
Let’s talk about how things work.
First, a little bit of context
We can’t talk shop without focusing for a quick second on how the training process works. …
Finding whether your machine learning model is providing any value is not that simple. Yeah, of course, the loss is going down, and accuracy is through the roof, but that’s not enough.
Is this thing actually any good?
Yes, I’m one of those who has bragged before about a model that did worse than a pair of nested if-else conditions. Focus too much on the trees 🌳, and you’ll certainly miss the forest.
Let’s get that fixed.
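A good first step is to always compare the model against a trivial baseline, like those nested if-else conditions. Here’s a minimal sketch with made-up labels:

```python
import numpy as np

# Made-up ground truth: 90% of the examples belong to class 0.
labels = np.array([0] * 90 + [1] * 10)

# A majority-class "model": always predict the most common class.
majority_class = np.bincount(labels).argmax()
baseline_accuracy = np.mean(labels == majority_class)
print(baseline_accuracy)  # 0.9

# A "model" guessing at random, standing in for a bad classifier.
model_predictions = np.random.randint(0, 2, size=labels.shape)
model_accuracy = np.mean(model_predictions == labels)
```

If your fancy model can’t beat that 90%, the loss going down means nothing. The baseline is the floor any real model has to clear.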
I’ve been building traditional software my entire life, and there’s something nice about it: it either works, or it doesn’t.
Movies make up a lot of shit all the time, so I hesitated to use them as a good example here, but I couldn’t find a better introduction, so let’s go ahead with this one.
Try to remember one of those scenes where Mr. Detective feeds a computer a partial photo of a nobody captured on a CCTV camera, trying to find their identity. The computer goes, picture by picture, through its database until it finds a match.
Is this even possible? How can they compare pictures like that? 🤯
Alright, we know we can’t simply compare pixels from two…
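To see why comparing raw pixels falls apart, shift an image by a single pixel: the content is identical, yet a pixel-wise distance treats it as a completely different picture. A minimal sketch with a made-up “photo”:

```python
import numpy as np

rng = np.random.default_rng(42)

# A made-up 8x8 grayscale "photo".
image = rng.random((8, 8))

# The exact same image, shifted one pixel to the right.
shifted = np.roll(image, shift=1, axis=1)

# Pixel-wise Euclidean distance between the versions.
identical = np.linalg.norm(image - image)
distance = np.linalg.norm(image - shifted)

print(identical)  # 0.0: an image compared with itself
print(distance)   # large, even though both show the same thing
```

Real systems sidestep this by comparing learned embeddings of the images instead of the pixels themselves, so small shifts, crops, and lighting changes barely move the distance.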
Of course, there is more than one way to build a good model. I’ve seen, however, how easy it is to make time disappear when we spend too much of it going down the wrong rabbit holes.
Over the years, I’ve built my own rudimentary set of steps that I always follow when starting a new project. Some of these have been recommendations from people who came before me, and some I’ve found after banging my head more than once.
Today, let’s focus on one of them: I want you to stop being scared about overfitting, embrace it, and — what’s even…
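To make overfitting less scary, it helps to see how easy it is to trigger on purpose. The sketch below (my own illustration, not from the article) fits a degree-9 polynomial to ten noisy points, giving it enough capacity to memorize every one of them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training points from a simple underlying curve.
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.shape)

# A degree-9 polynomial can pass through all ten points exactly.
coefficients = np.polyfit(x, y, deg=9)
predictions = np.polyval(coefficients, x)

training_error = np.mean((predictions - y) ** 2)
print(training_error)  # tiny: the model memorized the training data
```

A near-zero training error like this would do terribly on new points, which is the classic warning. But it also proves the model has the capacity to learn the data at all, and that is information worth having.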
I build machine learning systems until 5 pm. Then I come here and tell you stories about them.