Common knowledge is an interesting phenomenon that is constantly changing, so what’s common today might not be so common tomorrow. 

Think about how the common knowledge of 1860 compares to the common knowledge of today (2020)… If you had even mentioned the concept of a smartphone, the internet, autonomous cars, or even birth control back then, people would have thought you were either a genius or crazy (most likely the latter). 

The beautiful thing about this ever-evolving knowledge is that it drastically improves the way we live and interact. When I think about common knowledge, I think about ideas, technology, practices, etc. that are easily understood by the majority of people… For example, how to use a smartphone, how to Google something, or what the internet is/does. 

This baseline understanding is born out of a few things… Simple-to-understand ideas, plus easily accessible and usable technology.

If I asked you to create a website, you might struggle a little, but after watching a few YouTube videos and finding an easy drag-and-drop website builder (e.g. Squarespace, WordPress, Wix, etc.)… You would eventually create something that looks like a website in a reasonable amount of time. That’s thanks to websites being common knowledge and the tools becoming easier to access and use. 

But now… What if I asked you to create a machine that could predict what movies I might enjoy watching based on my Netflix history? Most likely you would either tell me to f*** off or you would go pay a room full of highly educated data scientists, engineers, and software developers to make it. Creating anything that looks even remotely like AI is hard, but that’s because the ideas behind the technology are complicated and the technology itself is not very user-friendly or easy to access. 

The democratization of the internet and the technology behind it has fundamentally changed the world we live in today, but my question after this week’s wandering is… What happens when we democratize the ability to create machines that can “think”? … That’s when the game is really going to change. 

This week we’ve wandered into the world of machine learning, specifically the automation and simplification of machine learning (a.k.a. AutoML). 

What’s behind a thinking machine? 

There are three big things we’ll need to do here… 

  1. GATHER → Get some data
  2. BUILD → Build a model
  3. DEPLOY → Put that model out into the world

The picture below shows a bit more detail, but in a nutshell, these three steps are good enough for us. 
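If you want to see what those three steps look like in code, here’s a minimal sketch using scikit-learn… The built-in Iris dataset and the “save to disk” deploy step are just stand-ins for illustration, not a real project.

```python
# A minimal sketch of GATHER → BUILD → DEPLOY, with stand-in data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
import joblib

# 1. GATHER → get some data (Iris is a toy stand-in for your real data)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 2. BUILD → build a model and check it does something sensible
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# 3. DEPLOY → put that model out into the world
# (here "deploy" is just saving it to disk so an app could load it later)
joblib.dump(model, "model.joblib")
```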

A machine learning guru (ML guru) would have to go through all of these steps multiple times to create just the right machine that “thinks” the way they want it to. Plus, that guru most likely has spent years figuring out the math, technical details, and mental heuristics needed to actually get through this entire process. 

Today, most ML gurus spend the majority of their time (around 80%) playing with their data… Gathering it, cleaning it, analyzing it, and manipulating it to make sure it fits perfectly into their “thinking” machines. Sadly, most ML gurus (93%) say this is the least enjoyable part of their job. 
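To give you a (hypothetical) taste of that 80%, here’s the kind of cleanup an ML guru might do with pandas… The file name and columns below are completely made up for illustration.

```python
# A tiny, hypothetical slice of data wrangling with pandas.
import pandas as pd

df = pd.read_csv("watch_history.csv")                     # gather the raw data
df = df.drop_duplicates()                                 # remove duplicate rows
df["minutes_watched"] = df["minutes_watched"].fillna(0)   # fill in missing values
df["genre"] = df["genre"].str.strip().str.lower()         # normalize messy text
df = df[df["minutes_watched"] >= 0]                       # drop impossible values
print(df.describe())                                      # sanity-check the result
```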

The interesting thing here is that we tend to think of an ML guru as someone who spends their day pulling insights from data, but that’s actually a very small slice of their time (11%). 

Remember the “AutoML” thing I mentioned earlier? Well… That’s the potential game-changer here. Instead of needing an ML guru with years of experience and knowledge, we’re approaching a place where anyone with the patience and interest in learning a few basic tools can create their own “thinking” machine. 

Automating the impossible

The idea of automating machine learning can turn SUPER sci-fi and meta when looking into the far future, but before we go there, let’s ground our expectations in reality. 

The tools for automating machine learning today are pretty limited and still early in their development. Looking at the three buckets I mentioned above (gather, build, deploy), current AutoML tools are mainly focused on the “build” part of the process. 

Some companies claim they’re already tackling the entire “end-to-end” (industry lingo) machine learning process, but I’ve yet to come across any ML gurus who agree. 

Even though this is a small subset of the entire process of creating a thinking machine, it’s still ridiculously impressive what’s being done today. Most of the leading tools in this AutoML space are essentially choosing which model best fits the data fed into them… Let me explain… 

Netflix… Most of us have heard of or used Netflix at some point. I’m sure you’ve surprised yourself with how much time you can waste watching endless shows and movies because the content is just that good. Well, that’s due to Netflix slowly learning your preferences and building out the perfect content catalog for you. Let’s break this down… Based on the shows and movies you’ve already watched, Netflix can offer up similar content you might also enjoy (a toy sketch of the idea is below). 
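Here’s that “recommend similar content” idea boiled down to a few lines… The shows, genre scores, and taste profile are all made up, and real recommender systems are far more sophisticated than this.

```python
# A toy, made-up version of "recommend similar content": describe each show
# with genre scores and suggest whatever is closest to your watch history.
import numpy as np

# Hypothetical shows described by [comedy, drama, sci-fi] scores
catalog = {
    "Space Saga":    np.array([0.1, 0.3, 0.9]),
    "Office Laughs": np.array([0.9, 0.2, 0.0]),
    "Robot Romance": np.array([0.3, 0.6, 0.7]),
}
watched = np.array([0.2, 0.4, 0.8])  # your (made-up) taste profile

def cosine(a, b):
    # Cosine similarity: 1.0 means identical taste direction
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank shows by similarity to your history and recommend the best match
ranked = sorted(catalog.items(), key=lambda kv: cosine(watched, kv[1]), reverse=True)
print(ranked[0][0])  # → "Space Saga"
```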

That’s exactly what’s happening with these AutoML tools. When building a “thinking” machine, the main goal is to find a model that works best with your data… It’s like a matchmaking game, similar to Tinder for “data” and “models” (not the runway kind). Historically this matchmaking was done through trial and error, with ML gurus using previous experience and heuristics to guess which model would best fit their data (see more here). Today these AutoML tools are learning which models best fit certain types of data by running through thousands of iterations, instead of relying on us faulty humans guessing. 
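For the curious, here’s a bare-bones version of that matchmaking in scikit-learn… It just tries a few candidate models on stand-in data and keeps whichever cross-validates best (real AutoML tools do this far more cleverly, across thousands of candidates).

```python
# A bare-bones "matchmaking" loop: try candidate models, keep the best match.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)  # stand-in data

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(),
}

# "Swipe" on each model: 5-fold cross-validation scores each candidate
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print("best match:", best, scores[best])
```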

Outside of choosing which model is best for your data, these AutoML tools are also making suggestions on how exactly those models should be shaped (tuning their hyperparameters) and what additional pieces of data might be useful (engineering new features). 
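“Shaping” a model basically means tuning its hyperparameters. Here’s a hand-rolled sketch using scikit-learn’s GridSearchCV on stand-in data… AutoML tools search spaces like this (and much bigger ones) for you automatically.

```python
# Hyperparameter "shaping" by hand: search a small grid of model settings.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)  # stand-in data

param_grid = {
    "n_estimators": [50, 100, 200],  # how many trees in the forest
    "max_depth": [3, 5, None],       # how deep each tree may grow
}

# Try every combination with 5-fold cross-validation and keep the best
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```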

AutoML of Today

There are a few different companies offering this kind of automation, but this space is new, which means any kind of list is constantly changing. Below is a helpful chart showing the different AutoML players today and their current focus. 

As you might expect, a group of ML gurus got together and compared the top four full-pipeline AutoML tools to see which was the “best”… “Best” is in quotes because it’s pretty subjective, but it’s still some useful research. In the end, they found that “Auto-sklearn” performs best on classification datasets (predicting categories, e.g. black or white, true or false) and “TPOT” performs best on regression datasets (predicting continuous numeric values).
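To make that concrete, here’s roughly what using TPOT looks like, based on its documented API… The dataset and parameter choices below are just illustrative.

```python
# A sketch of TPOT's documented workflow; generations/population_size are
# illustrative knobs that control how long the automated search runs.
from tpot import TPOTClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # stand-in data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# TPOT evolves whole pipelines (preprocessing + models) over generations
tpot = TPOTClassifier(generations=5, population_size=20, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # writes the winning pipeline out as Python code
```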

If you’re interested, here are a few different lists of AutoML tools I could find… 

AutoML of Tomorrow

I mentioned earlier that this whole AutoML thing can get sci-fi fast, and that’s true… 

An analogy from Randy Olson sums this up really well…

  • Computer programming is focused on automating repeatable tasks
  • Machine learning is focused on automating the automation of repeatable tasks
  • Meta-learning is focused on automating the automation of automation
    • i.e., enabling the machine to learn how to learn in the best way possible

It’s turtles all the way down! Haha! 

The idea above is that we can basically remove the human from the loop completely, having “thinking” machines create and constantly improve other “thinking” machines. Now, that world is far away (at least I think so…), but the democratization, and the benefits it’ll bring across the planet, are much closer. 

As I mentioned before… 

Democratizing the internet and its underlying technology has changed the world we live in today. The amount of change has been massive, but it’s only going to increase with the democratization of our ability to use and build “thinking” machines. 

I’m not sure about you, but I’m extremely excited by the developments in the world of AutoML and how the barrier to entry has been lowered for us non-ML gurus. 

Until next time my fellow wanderers! 🙂