The mitigation of manual labor through automation has always been a goal, especially since the dawn of machines with the industrial revolution. While the term automation was coined in the 1940s in the context of motor vehicle assembly, today the term has taken on another meaning. Automation of data science/predictive modeling/machine learning is set to usher in a new era in data science.

A machine learning (ML) project can be thought of as a sequence of decisions. These decisions are based on a finite set of considerations: collecting and formatting the data, imputing missing values, deciding the independent and dependent variables, and so on. Most ML projects follow a similar execution blueprint, which in turn makes the whole process feasible for automation.

Recognizing this, several companies, including the machine learning giants Google and Microsoft, have entered the fray, each with its own automated machine learning offering: Cloud AutoML from Google and Azure AutoML from Microsoft. I provide a brief overview of Microsoft's Azure AutoML here.

So what do I think of AutoML? I have always thought of it as an inevitability. I believed in the value of this approach so much that I built my own such application many years ago, one that would search through different methods and then rank the resulting models by some selected performance metric. I should quickly caveat that my effort was not nearly as smooth and pleasant as my experience with today's polished AutoML.

A high-quality predictive model can be constructed by a careful orchestration of if-then-else statements that take into account the completeness of the data, the metric the user wants to optimize, and the resources the user wants to expend searching a solution space that spans different methods (e.g. decision trees, neural networks, regression). There will always be a need for a human to add context and domain-specific knowledge, and these AutoML algorithms are figuring out where and how to add it in the sequence. For example, in Azure AutoML a user can decide a) which variables to use in the modeling, b) which algorithms, if any, to exclude from testing, and c) data guardrails, such as the protocol for splitting training and testing data.
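The core loop described above — try several method families on the same data, score each with a user-selected metric, and rank the results — can be sketched in a few lines. This is my own minimal illustration using scikit-learn, not Azure AutoML's actual implementation; the candidate models, dataset, and choice of accuracy as the metric are all assumptions for the example.

```python
# A minimal sketch of an AutoML-style search: evaluate several method
# families with one user-chosen metric and rank them on a leaderboard.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# The solution space: one representative per method family (illustrative).
candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "neural_network": MLPClassifier(max_iter=1000, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=5000),
}

# Score every candidate with the same metric and cross-validation protocol...
results = {
    name: cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    for name, model in candidates.items()
}

# ...then rank models best-first, as an AutoML leaderboard would.
leaderboard = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
for name, score in leaderboard:
    print(f"{name}: {score:.3f}")
```

A real AutoML system layers the if-then-else guardrails on top of this loop — choosing candidates based on data completeness, excluding user-blocked algorithms, and stopping when the resource budget runs out — but the evaluate-and-rank core is the same.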

While imperfect, I am impressed with where AutoML is today. Not only does it provide a guided, no-code way to automatically try different methods to optimize a selected performance metric, it also provides useful explanatory tools for interacting with the developed models and gaining more insight. We have come a long way, but the road ahead is far longer than the one behind us. And that is incredibly exciting… and a bit disconcerting.

First, the exciting part. I can see AutoML rapidly developing to encompass more models than could feasibly be run by a data scientist on a given project. New methods and techniques come out every month; it is impossible for a single data scientist, or even a team, to keep up. But an AutoML library of available methods can always carry the latest updates. Further, these models will execute faster than any data scientist could hope to run them, thanks to highly parallel computing on clusters of servers and AutoML's marshalling of model executions. The biggest advance I see coming is an ever-improving ability of AutoML to explain model output to a non-expert audience via visual aids, graphs, explanations, and so on. The ability to encode the collective expertise of mathematicians, experts in certain algorithms (e.g. decision trees, neural nets), user experience designers, and storytellers into a few clicks will yield an incredibly powerful tool for turning data into insight and prediction.

The disconcerting bit is thinking about where this leaves the army of data scientists, predictive modelers, and machine learning enablers that has been created over the last decade. I am not going to predict that AI will take the jobs of its own creators. But there will be an impact, and I think a good one. Historically, the training emphasis in AI has been far too much on technique and coding. AI/machine learning/predictive modeling is, like everything before it, all about telling a compelling story. AutoML can now include the talents and creativity of non-coders in telling that story.

Thinking that some digital robot is going to be running these models in the future is a mistake. Companies are a) not likely to readily outsource their data in today's digital security environment and b) not going to rely on a black box creating more black boxes. We will need trained and talented humans running AutoML, making sense of the output and, all-importantly, re-telling the story they learn to the relevant human audience and making connections in a way only a human can. AutoML has the potential to take the drudgery out of production while being more inclusive of different talents, just like the machines of the industrial revolution.

